00:00:00.001 Started by upstream project "autotest-per-patch" build number 132333 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.097 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:18.397 The recommended git tool is: git 00:00:18.398 using credential 00000000-0000-0000-0000-000000000002 00:00:18.400 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:18.410 Fetching changes from the remote Git repository 00:00:18.413 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:18.425 Using shallow fetch with depth 1 00:00:18.426 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:18.426 > git --version # timeout=10 00:00:18.436 > git --version # 'git version 2.39.2' 00:00:18.436 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:18.448 Setting http proxy: proxy-dmz.intel.com:911 00:00:18.448 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:23.992 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:24.007 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:24.022 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:24.022 > git config core.sparsecheckout # timeout=10 00:00:24.037 > git read-tree -mu HEAD # timeout=10 00:00:24.056 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:24.085 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:24.085 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:24.201 [Pipeline] Start of Pipeline 00:00:24.215 [Pipeline] library 00:00:24.217 Loading library shm_lib@master 00:00:24.218 Library shm_lib@master is cached. Copying from home. 00:00:24.235 [Pipeline] node 00:00:24.245 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:24.247 [Pipeline] { 00:00:24.258 [Pipeline] catchError 00:00:24.260 [Pipeline] { 00:00:24.274 [Pipeline] wrap 00:00:24.284 [Pipeline] { 00:00:24.293 [Pipeline] stage 00:00:24.295 [Pipeline] { (Prologue) 00:00:24.497 [Pipeline] sh 00:00:24.791 + logger -p user.info -t JENKINS-CI 00:00:24.814 [Pipeline] echo 00:00:24.816 Node: CYP9 00:00:24.826 [Pipeline] sh 00:00:25.140 [Pipeline] setCustomBuildProperty 00:00:25.152 [Pipeline] echo 00:00:25.153 Cleanup processes 00:00:25.158 [Pipeline] sh 00:00:25.450 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:25.450 1659165 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:25.465 [Pipeline] sh 00:00:25.757 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:25.757 ++ grep -v 'sudo pgrep' 00:00:25.758 ++ awk '{print $1}' 00:00:25.758 + sudo kill -9 00:00:25.758 + true 00:00:25.774 [Pipeline] cleanWs 00:00:25.786 [WS-CLEANUP] Deleting project workspace... 00:00:25.786 [WS-CLEANUP] Deferred wipeout is used... 
00:00:25.793 [WS-CLEANUP] done 00:00:25.798 [Pipeline] setCustomBuildProperty 00:00:25.813 [Pipeline] sh 00:00:26.104 + sudo git config --global --replace-all safe.directory '*' 00:00:26.200 [Pipeline] httpRequest 00:00:26.649 [Pipeline] echo 00:00:26.651 Sorcerer 10.211.164.20 is alive 00:00:26.663 [Pipeline] retry 00:00:26.665 [Pipeline] { 00:00:26.683 [Pipeline] httpRequest 00:00:26.688 HttpMethod: GET 00:00:26.689 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:26.690 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:26.712 Response Code: HTTP/1.1 200 OK 00:00:26.712 Success: Status code 200 is in the accepted range: 200,404 00:00:26.713 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:30.781 [Pipeline] } 00:00:30.798 [Pipeline] // retry 00:00:30.805 [Pipeline] sh 00:00:31.097 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:31.118 [Pipeline] httpRequest 00:00:31.490 [Pipeline] echo 00:00:31.492 Sorcerer 10.211.164.20 is alive 00:00:31.501 [Pipeline] retry 00:00:31.503 [Pipeline] { 00:00:31.517 [Pipeline] httpRequest 00:00:31.522 HttpMethod: GET 00:00:31.522 URL: http://10.211.164.20/packages/spdk_8d982eda9913058eb14f8150efac38104d171f37.tar.gz 00:00:31.524 Sending request to url: http://10.211.164.20/packages/spdk_8d982eda9913058eb14f8150efac38104d171f37.tar.gz 00:00:31.550 Response Code: HTTP/1.1 200 OK 00:00:31.550 Success: Status code 200 is in the accepted range: 200,404 00:00:31.551 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_8d982eda9913058eb14f8150efac38104d171f37.tar.gz 00:01:40.651 [Pipeline] } 00:01:40.668 [Pipeline] // retry 00:01:40.675 [Pipeline] sh 00:01:40.968 + tar --no-same-owner -xf spdk_8d982eda9913058eb14f8150efac38104d171f37.tar.gz 00:01:44.285 [Pipeline] sh 00:01:44.575 + git -C spdk log 
--oneline -n5 00:01:44.575 8d982eda9 dpdk: add adjustments for recent rte_power changes 00:01:44.575 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:01:44.575 73f18e890 lib/reduce: fix the magic number of empty mapping detection. 00:01:44.575 029355612 bdev_ut: add manual examine bdev unit test case 00:01:44.575 fc96810c2 bdev: remove bdev from examine allow list on unregister 00:01:44.588 [Pipeline] } 00:01:44.601 [Pipeline] // stage 00:01:44.611 [Pipeline] stage 00:01:44.614 [Pipeline] { (Prepare) 00:01:44.630 [Pipeline] writeFile 00:01:44.646 [Pipeline] sh 00:01:44.937 + logger -p user.info -t JENKINS-CI 00:01:44.962 [Pipeline] sh 00:01:45.316 + logger -p user.info -t JENKINS-CI 00:01:45.348 [Pipeline] sh 00:01:45.639 + cat autorun-spdk.conf 00:01:45.639 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.639 SPDK_TEST_NVMF=1 00:01:45.639 SPDK_TEST_NVME_CLI=1 00:01:45.639 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:45.639 SPDK_TEST_NVMF_NICS=e810 00:01:45.639 SPDK_TEST_VFIOUSER=1 00:01:45.639 SPDK_RUN_UBSAN=1 00:01:45.639 NET_TYPE=phy 00:01:45.648 RUN_NIGHTLY=0 00:01:45.654 [Pipeline] readFile 00:01:45.681 [Pipeline] withEnv 00:01:45.684 [Pipeline] { 00:01:45.699 [Pipeline] sh 00:01:45.992 + set -ex 00:01:45.992 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:45.992 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:45.992 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.992 ++ SPDK_TEST_NVMF=1 00:01:45.992 ++ SPDK_TEST_NVME_CLI=1 00:01:45.992 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:45.992 ++ SPDK_TEST_NVMF_NICS=e810 00:01:45.992 ++ SPDK_TEST_VFIOUSER=1 00:01:45.992 ++ SPDK_RUN_UBSAN=1 00:01:45.992 ++ NET_TYPE=phy 00:01:45.992 ++ RUN_NIGHTLY=0 00:01:45.992 + case $SPDK_TEST_NVMF_NICS in 00:01:45.992 + DRIVERS=ice 00:01:45.992 + [[ tcp == \r\d\m\a ]] 00:01:45.992 + [[ -n ice ]] 00:01:45.992 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:45.992 rmmod: ERROR: Module mlx4_ib is not currently loaded 
00:01:45.992 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:45.992 rmmod: ERROR: Module irdma is not currently loaded 00:01:45.992 rmmod: ERROR: Module i40iw is not currently loaded 00:01:45.992 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:45.992 + true 00:01:45.992 + for D in $DRIVERS 00:01:45.992 + sudo modprobe ice 00:01:45.992 + exit 0 00:01:46.002 [Pipeline] } 00:01:46.017 [Pipeline] // withEnv 00:01:46.022 [Pipeline] } 00:01:46.036 [Pipeline] // stage 00:01:46.046 [Pipeline] catchError 00:01:46.048 [Pipeline] { 00:01:46.060 [Pipeline] timeout 00:01:46.060 Timeout set to expire in 1 hr 0 min 00:01:46.062 [Pipeline] { 00:01:46.075 [Pipeline] stage 00:01:46.077 [Pipeline] { (Tests) 00:01:46.091 [Pipeline] sh 00:01:46.383 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:46.383 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:46.383 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:46.383 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:46.383 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:46.383 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:46.383 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:46.383 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:46.383 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:46.383 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:46.383 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:46.383 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:46.383 + source /etc/os-release 00:01:46.383 ++ NAME='Fedora Linux' 00:01:46.383 ++ VERSION='39 (Cloud Edition)' 00:01:46.383 ++ ID=fedora 00:01:46.383 ++ VERSION_ID=39 00:01:46.383 ++ VERSION_CODENAME= 00:01:46.383 ++ PLATFORM_ID=platform:f39 00:01:46.383 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:46.383 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:46.383 ++ LOGO=fedora-logo-icon 00:01:46.383 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:46.383 ++ HOME_URL=https://fedoraproject.org/ 00:01:46.383 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:46.383 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:46.383 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:46.383 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:46.383 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:46.383 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:46.383 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:46.383 ++ SUPPORT_END=2024-11-12 00:01:46.383 ++ VARIANT='Cloud Edition' 00:01:46.384 ++ VARIANT_ID=cloud 00:01:46.384 + uname -a 00:01:46.384 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:46.384 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:49.688 Hugepages 00:01:49.688 node hugesize free / total 00:01:49.688 node0 1048576kB 0 / 0 00:01:49.688 node0 2048kB 0 / 0 00:01:49.688 node1 1048576kB 0 / 0 00:01:49.688 node1 2048kB 0 / 0 00:01:49.688 00:01:49.688 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:49.688 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:49.688 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 
00:01:49.688 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:49.688 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:49.688 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:49.688 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:49.688 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:49.688 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:49.688 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:49.688 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:49.688 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:49.688 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:49.688 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:49.688 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:49.688 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:49.688 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:49.688 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:49.688 + rm -f /tmp/spdk-ld-path 00:01:49.688 + source autorun-spdk.conf 00:01:49.688 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:49.688 ++ SPDK_TEST_NVMF=1 00:01:49.688 ++ SPDK_TEST_NVME_CLI=1 00:01:49.688 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:49.688 ++ SPDK_TEST_NVMF_NICS=e810 00:01:49.688 ++ SPDK_TEST_VFIOUSER=1 00:01:49.688 ++ SPDK_RUN_UBSAN=1 00:01:49.688 ++ NET_TYPE=phy 00:01:49.688 ++ RUN_NIGHTLY=0 00:01:49.688 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:49.688 + [[ -n '' ]] 00:01:49.689 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:49.689 + for M in /var/spdk/build-*-manifest.txt 00:01:49.689 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:49.689 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:49.689 + for M in /var/spdk/build-*-manifest.txt 00:01:49.689 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:49.689 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:49.689 + for M in /var/spdk/build-*-manifest.txt 00:01:49.689 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:01:49.689 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:49.689 ++ uname 00:01:49.689 + [[ Linux == \L\i\n\u\x ]] 00:01:49.689 + sudo dmesg -T 00:01:49.689 + sudo dmesg --clear 00:01:49.689 + dmesg_pid=1660711 00:01:49.689 + [[ Fedora Linux == FreeBSD ]] 00:01:49.689 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:49.689 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:49.689 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:49.689 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:49.689 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:49.689 + [[ -x /usr/src/fio-static/fio ]] 00:01:49.689 + sudo dmesg -Tw 00:01:49.689 + export FIO_BIN=/usr/src/fio-static/fio 00:01:49.689 + FIO_BIN=/usr/src/fio-static/fio 00:01:49.689 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:49.689 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:49.689 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:49.689 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:49.689 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:49.689 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:49.689 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:49.689 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:49.689 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:49.689 18:00:51 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:49.689 18:00:51 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:49.689 18:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:49.689 18:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:49.689 18:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:49.689 18:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:49.689 18:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:49.689 18:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:49.689 18:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:49.689 18:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:49.689 18:00:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:49.689 18:00:51 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:49.689 18:00:51 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:49.952 18:00:51 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:49.952 18:00:51 -- common/autobuild_common.sh@15 -- $ source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:49.952 18:00:51 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:49.952 18:00:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:49.952 18:00:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:49.952 18:00:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:49.952 18:00:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.952 18:00:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.952 18:00:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.952 18:00:51 -- paths/export.sh@5 -- $ export PATH 00:01:49.952 18:00:51 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.952 18:00:51 -- common/autobuild_common.sh@486 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:49.952 18:00:51 -- common/autobuild_common.sh@487 -- $ date +%s 00:01:49.952 18:00:51 -- common/autobuild_common.sh@487 -- $ mktemp -dt spdk_1732035651.XXXXXX 00:01:49.952 18:00:51 -- common/autobuild_common.sh@487 -- $ SPDK_WORKSPACE=/tmp/spdk_1732035651.DcoKVa 00:01:49.952 18:00:51 -- common/autobuild_common.sh@489 -- $ [[ -n '' ]] 00:01:49.952 18:00:51 -- common/autobuild_common.sh@493 -- $ '[' -n '' ']' 00:01:49.952 18:00:51 -- common/autobuild_common.sh@496 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:49.952 18:00:51 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:49.952 18:00:51 -- common/autobuild_common.sh@502 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:49.952 18:00:51 -- common/autobuild_common.sh@503 -- $ get_config_params 00:01:49.952 18:00:51 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:49.952 18:00:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.952 18:00:51 -- common/autobuild_common.sh@503 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio 
--with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:49.952 18:00:51 -- common/autobuild_common.sh@505 -- $ start_monitor_resources 00:01:49.952 18:00:51 -- pm/common@17 -- $ local monitor 00:01:49.952 18:00:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.952 18:00:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.952 18:00:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.952 18:00:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.952 18:00:51 -- pm/common@21 -- $ date +%s 00:01:49.952 18:00:51 -- pm/common@21 -- $ date +%s 00:01:49.952 18:00:51 -- pm/common@25 -- $ sleep 1 00:01:49.952 18:00:51 -- pm/common@21 -- $ date +%s 00:01:49.952 18:00:51 -- pm/common@21 -- $ date +%s 00:01:49.952 18:00:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732035651 00:01:49.952 18:00:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732035651 00:01:49.952 18:00:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732035651 00:01:49.952 18:00:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732035651 00:01:49.952 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732035651_collect-cpu-load.pm.log 00:01:49.952 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732035651_collect-vmstat.pm.log 00:01:49.952 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732035651_collect-cpu-temp.pm.log 00:01:49.952 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732035651_collect-bmc-pm.bmc.pm.log 00:01:50.898 18:00:52 -- common/autobuild_common.sh@506 -- $ trap stop_monitor_resources EXIT 00:01:50.898 18:00:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:50.898 18:00:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:50.898 18:00:52 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:50.898 18:00:52 -- spdk/autobuild.sh@16 -- $ date -u 00:01:50.898 Tue Nov 19 05:00:52 PM UTC 2024 00:01:50.898 18:00:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:50.898 v25.01-pre-198-g8d982eda9 00:01:50.898 18:00:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:50.898 18:00:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:50.898 18:00:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:50.898 18:00:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:50.898 18:00:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:50.898 18:00:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.898 ************************************ 00:01:50.898 START TEST ubsan 00:01:50.898 ************************************ 00:01:50.898 18:00:52 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:50.898 using ubsan 00:01:50.898 00:01:50.898 real 0m0.001s 00:01:50.898 user 0m0.000s 00:01:50.898 sys 0m0.000s 00:01:50.898 18:00:52 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:50.898 18:00:52 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:50.898 ************************************ 00:01:50.898 END TEST ubsan 00:01:50.898 
************************************ 00:01:51.160 18:00:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:51.160 18:00:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:51.160 18:00:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:51.160 18:00:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:51.160 18:00:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:51.160 18:00:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:51.160 18:00:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:51.160 18:00:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:51.160 18:00:52 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:51.160 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:51.160 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:51.733 Using 'verbs' RDMA provider 00:02:07.217 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:22.127 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:22.127 Creating mk/config.mk...done. 00:02:22.127 Creating mk/cc.flags.mk...done. 00:02:22.127 Type 'make' to build. 
00:02:22.127 18:01:21 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:02:22.127 18:01:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:22.127 18:01:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:22.127 18:01:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.127 ************************************ 00:02:22.127 START TEST make 00:02:22.127 ************************************ 00:02:22.127 18:01:21 make -- common/autotest_common.sh@1129 -- $ make -j144 00:02:22.127 make[1]: Nothing to be done for 'all'. 00:02:22.127 The Meson build system 00:02:22.127 Version: 1.5.0 00:02:22.127 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:22.127 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:22.128 Build type: native build 00:02:22.128 Project name: libvfio-user 00:02:22.128 Project version: 0.0.1 00:02:22.128 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:22.128 C linker for the host machine: cc ld.bfd 2.40-14 00:02:22.128 Host machine cpu family: x86_64 00:02:22.128 Host machine cpu: x86_64 00:02:22.128 Run-time dependency threads found: YES 00:02:22.128 Library dl found: YES 00:02:22.128 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:22.128 Run-time dependency json-c found: YES 0.17 00:02:22.128 Run-time dependency cmocka found: YES 1.1.7 00:02:22.128 Program pytest-3 found: NO 00:02:22.128 Program flake8 found: NO 00:02:22.128 Program misspell-fixer found: NO 00:02:22.128 Program restructuredtext-lint found: NO 00:02:22.128 Program valgrind found: YES (/usr/bin/valgrind) 00:02:22.128 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:22.128 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:22.128 Compiler for C supports arguments -Wwrite-strings: YES 00:02:22.128 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but 
uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:22.128 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:22.128 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:22.128 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:22.128 Build targets in project: 8 00:02:22.128 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:22.128 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:22.128 00:02:22.128 libvfio-user 0.0.1 00:02:22.128 00:02:22.128 User defined options 00:02:22.128 buildtype : debug 00:02:22.128 default_library: shared 00:02:22.128 libdir : /usr/local/lib 00:02:22.128 00:02:22.128 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:22.701 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:22.701 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:22.701 [2/37] Compiling C object samples/null.p/null.c.o 00:02:22.701 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:22.701 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:22.701 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:22.701 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:22.701 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:22.701 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:22.701 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:22.701 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:22.701 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 
00:02:22.701 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:22.701 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:22.701 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:22.701 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:22.701 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:22.701 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:22.701 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:22.701 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:22.701 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:22.701 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:22.701 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:22.701 [23/37] Compiling C object samples/server.p/server.c.o 00:02:22.701 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:22.701 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:22.962 [26/37] Compiling C object samples/client.p/client.c.o 00:02:22.962 [27/37] Linking target samples/client 00:02:22.962 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:22.962 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:22.962 [30/37] Linking target test/unit_tests 00:02:22.962 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:22.962 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:23.222 [33/37] Linking target samples/server 00:02:23.222 [34/37] Linking target samples/gpio-pci-idio-16 00:02:23.223 [35/37] Linking target samples/lspci 00:02:23.223 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:23.223 [37/37] Linking target samples/null 00:02:23.223 INFO: autodetecting backend as ninja 00:02:23.223 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:23.223 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:23.483 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:23.483 ninja: no work to do. 00:02:30.076 The Meson build system 00:02:30.076 Version: 1.5.0 00:02:30.076 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:30.076 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:30.076 Build type: native build 00:02:30.076 Program cat found: YES (/usr/bin/cat) 00:02:30.076 Project name: DPDK 00:02:30.076 Project version: 24.03.0 00:02:30.076 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:30.076 C linker for the host machine: cc ld.bfd 2.40-14 00:02:30.076 Host machine cpu family: x86_64 00:02:30.076 Host machine cpu: x86_64 00:02:30.076 Message: ## Building in Developer Mode ## 00:02:30.076 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:30.076 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:30.076 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:30.076 Program python3 found: YES (/usr/bin/python3) 00:02:30.076 Program cat found: YES (/usr/bin/cat) 00:02:30.076 Compiler for C supports arguments -march=native: YES 00:02:30.076 Checking for size of "void *" : 8 00:02:30.076 Checking for size of "void *" : 8 (cached) 00:02:30.076 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:30.076 Library m found: YES 00:02:30.076 Library numa found: YES 00:02:30.076 Has header "numaif.h" : YES 00:02:30.076 Library fdt found: NO 
00:02:30.076 Library execinfo found: NO 00:02:30.076 Has header "execinfo.h" : YES 00:02:30.076 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:30.076 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:30.076 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:30.076 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:30.076 Run-time dependency openssl found: YES 3.1.1 00:02:30.076 Run-time dependency libpcap found: YES 1.10.4 00:02:30.076 Has header "pcap.h" with dependency libpcap: YES 00:02:30.076 Compiler for C supports arguments -Wcast-qual: YES 00:02:30.076 Compiler for C supports arguments -Wdeprecated: YES 00:02:30.076 Compiler for C supports arguments -Wformat: YES 00:02:30.076 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:30.076 Compiler for C supports arguments -Wformat-security: NO 00:02:30.076 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:30.076 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:30.076 Compiler for C supports arguments -Wnested-externs: YES 00:02:30.076 Compiler for C supports arguments -Wold-style-definition: YES 00:02:30.076 Compiler for C supports arguments -Wpointer-arith: YES 00:02:30.076 Compiler for C supports arguments -Wsign-compare: YES 00:02:30.076 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:30.076 Compiler for C supports arguments -Wundef: YES 00:02:30.076 Compiler for C supports arguments -Wwrite-strings: YES 00:02:30.076 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:30.076 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:30.076 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:30.076 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:30.076 Program objdump found: YES (/usr/bin/objdump) 00:02:30.076 Compiler for C supports arguments -mavx512f: YES 00:02:30.076 Checking if "AVX512 checking" compiles: YES 00:02:30.076 
Fetching value of define "__SSE4_2__" : 1 00:02:30.076 Fetching value of define "__AES__" : 1 00:02:30.076 Fetching value of define "__AVX__" : 1 00:02:30.076 Fetching value of define "__AVX2__" : 1 00:02:30.076 Fetching value of define "__AVX512BW__" : 1 00:02:30.076 Fetching value of define "__AVX512CD__" : 1 00:02:30.076 Fetching value of define "__AVX512DQ__" : 1 00:02:30.076 Fetching value of define "__AVX512F__" : 1 00:02:30.076 Fetching value of define "__AVX512VL__" : 1 00:02:30.076 Fetching value of define "__PCLMUL__" : 1 00:02:30.076 Fetching value of define "__RDRND__" : 1 00:02:30.076 Fetching value of define "__RDSEED__" : 1 00:02:30.076 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:30.076 Fetching value of define "__znver1__" : (undefined) 00:02:30.076 Fetching value of define "__znver2__" : (undefined) 00:02:30.076 Fetching value of define "__znver3__" : (undefined) 00:02:30.076 Fetching value of define "__znver4__" : (undefined) 00:02:30.076 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:30.076 Message: lib/log: Defining dependency "log" 00:02:30.076 Message: lib/kvargs: Defining dependency "kvargs" 00:02:30.076 Message: lib/telemetry: Defining dependency "telemetry" 00:02:30.076 Checking for function "getentropy" : NO 00:02:30.076 Message: lib/eal: Defining dependency "eal" 00:02:30.076 Message: lib/ring: Defining dependency "ring" 00:02:30.076 Message: lib/rcu: Defining dependency "rcu" 00:02:30.076 Message: lib/mempool: Defining dependency "mempool" 00:02:30.076 Message: lib/mbuf: Defining dependency "mbuf" 00:02:30.076 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:30.076 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:30.076 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:30.076 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:30.076 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:30.076 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:30.076 Compiler 
for C supports arguments -mpclmul: YES 00:02:30.076 Compiler for C supports arguments -maes: YES 00:02:30.076 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:30.076 Compiler for C supports arguments -mavx512bw: YES 00:02:30.076 Compiler for C supports arguments -mavx512dq: YES 00:02:30.076 Compiler for C supports arguments -mavx512vl: YES 00:02:30.076 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:30.076 Compiler for C supports arguments -mavx2: YES 00:02:30.076 Compiler for C supports arguments -mavx: YES 00:02:30.076 Message: lib/net: Defining dependency "net" 00:02:30.076 Message: lib/meter: Defining dependency "meter" 00:02:30.076 Message: lib/ethdev: Defining dependency "ethdev" 00:02:30.076 Message: lib/pci: Defining dependency "pci" 00:02:30.076 Message: lib/cmdline: Defining dependency "cmdline" 00:02:30.076 Message: lib/hash: Defining dependency "hash" 00:02:30.076 Message: lib/timer: Defining dependency "timer" 00:02:30.076 Message: lib/compressdev: Defining dependency "compressdev" 00:02:30.076 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:30.076 Message: lib/dmadev: Defining dependency "dmadev" 00:02:30.076 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:30.076 Message: lib/power: Defining dependency "power" 00:02:30.076 Message: lib/reorder: Defining dependency "reorder" 00:02:30.076 Message: lib/security: Defining dependency "security" 00:02:30.076 Has header "linux/userfaultfd.h" : YES 00:02:30.076 Has header "linux/vduse.h" : YES 00:02:30.076 Message: lib/vhost: Defining dependency "vhost" 00:02:30.076 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:30.076 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:30.076 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:30.076 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:30.076 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:30.076 Message: 
Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:30.076 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:30.076 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:30.076 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:30.076 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:30.076 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:30.076 Configuring doxy-api-html.conf using configuration 00:02:30.076 Configuring doxy-api-man.conf using configuration 00:02:30.076 Program mandb found: YES (/usr/bin/mandb) 00:02:30.076 Program sphinx-build found: NO 00:02:30.076 Configuring rte_build_config.h using configuration 00:02:30.076 Message: 00:02:30.076 ================= 00:02:30.076 Applications Enabled 00:02:30.076 ================= 00:02:30.076 00:02:30.076 apps: 00:02:30.076 00:02:30.076 00:02:30.076 Message: 00:02:30.076 ================= 00:02:30.076 Libraries Enabled 00:02:30.076 ================= 00:02:30.076 00:02:30.076 libs: 00:02:30.076 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:30.076 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:30.076 cryptodev, dmadev, power, reorder, security, vhost, 00:02:30.076 00:02:30.076 Message: 00:02:30.076 =============== 00:02:30.076 Drivers Enabled 00:02:30.076 =============== 00:02:30.076 00:02:30.076 common: 00:02:30.076 00:02:30.076 bus: 00:02:30.076 pci, vdev, 00:02:30.076 mempool: 00:02:30.076 ring, 00:02:30.076 dma: 00:02:30.076 00:02:30.076 net: 00:02:30.076 00:02:30.076 crypto: 00:02:30.076 00:02:30.076 compress: 00:02:30.076 00:02:30.076 vdpa: 00:02:30.076 00:02:30.076 00:02:30.076 Message: 00:02:30.076 ================= 00:02:30.076 Content Skipped 00:02:30.076 ================= 00:02:30.076 00:02:30.076 apps: 00:02:30.076 dumpcap: explicitly disabled via build config 00:02:30.076 graph: explicitly disabled via build config 00:02:30.076 
pdump: explicitly disabled via build config 00:02:30.076 proc-info: explicitly disabled via build config 00:02:30.076 test-acl: explicitly disabled via build config 00:02:30.076 test-bbdev: explicitly disabled via build config 00:02:30.076 test-cmdline: explicitly disabled via build config 00:02:30.076 test-compress-perf: explicitly disabled via build config 00:02:30.077 test-crypto-perf: explicitly disabled via build config 00:02:30.077 test-dma-perf: explicitly disabled via build config 00:02:30.077 test-eventdev: explicitly disabled via build config 00:02:30.077 test-fib: explicitly disabled via build config 00:02:30.077 test-flow-perf: explicitly disabled via build config 00:02:30.077 test-gpudev: explicitly disabled via build config 00:02:30.077 test-mldev: explicitly disabled via build config 00:02:30.077 test-pipeline: explicitly disabled via build config 00:02:30.077 test-pmd: explicitly disabled via build config 00:02:30.077 test-regex: explicitly disabled via build config 00:02:30.077 test-sad: explicitly disabled via build config 00:02:30.077 test-security-perf: explicitly disabled via build config 00:02:30.077 00:02:30.077 libs: 00:02:30.077 argparse: explicitly disabled via build config 00:02:30.077 metrics: explicitly disabled via build config 00:02:30.077 acl: explicitly disabled via build config 00:02:30.077 bbdev: explicitly disabled via build config 00:02:30.077 bitratestats: explicitly disabled via build config 00:02:30.077 bpf: explicitly disabled via build config 00:02:30.077 cfgfile: explicitly disabled via build config 00:02:30.077 distributor: explicitly disabled via build config 00:02:30.077 efd: explicitly disabled via build config 00:02:30.077 eventdev: explicitly disabled via build config 00:02:30.077 dispatcher: explicitly disabled via build config 00:02:30.077 gpudev: explicitly disabled via build config 00:02:30.077 gro: explicitly disabled via build config 00:02:30.077 gso: explicitly disabled via build config 00:02:30.077 ip_frag: 
explicitly disabled via build config 00:02:30.077 jobstats: explicitly disabled via build config 00:02:30.077 latencystats: explicitly disabled via build config 00:02:30.077 lpm: explicitly disabled via build config 00:02:30.077 member: explicitly disabled via build config 00:02:30.077 pcapng: explicitly disabled via build config 00:02:30.077 rawdev: explicitly disabled via build config 00:02:30.077 regexdev: explicitly disabled via build config 00:02:30.077 mldev: explicitly disabled via build config 00:02:30.077 rib: explicitly disabled via build config 00:02:30.077 sched: explicitly disabled via build config 00:02:30.077 stack: explicitly disabled via build config 00:02:30.077 ipsec: explicitly disabled via build config 00:02:30.077 pdcp: explicitly disabled via build config 00:02:30.077 fib: explicitly disabled via build config 00:02:30.077 port: explicitly disabled via build config 00:02:30.077 pdump: explicitly disabled via build config 00:02:30.077 table: explicitly disabled via build config 00:02:30.077 pipeline: explicitly disabled via build config 00:02:30.077 graph: explicitly disabled via build config 00:02:30.077 node: explicitly disabled via build config 00:02:30.077 00:02:30.077 drivers: 00:02:30.077 common/cpt: not in enabled drivers build config 00:02:30.077 common/dpaax: not in enabled drivers build config 00:02:30.077 common/iavf: not in enabled drivers build config 00:02:30.077 common/idpf: not in enabled drivers build config 00:02:30.077 common/ionic: not in enabled drivers build config 00:02:30.077 common/mvep: not in enabled drivers build config 00:02:30.077 common/octeontx: not in enabled drivers build config 00:02:30.077 bus/auxiliary: not in enabled drivers build config 00:02:30.077 bus/cdx: not in enabled drivers build config 00:02:30.077 bus/dpaa: not in enabled drivers build config 00:02:30.077 bus/fslmc: not in enabled drivers build config 00:02:30.077 bus/ifpga: not in enabled drivers build config 00:02:30.077 bus/platform: not in 
enabled drivers build config 00:02:30.077 bus/uacce: not in enabled drivers build config 00:02:30.077 bus/vmbus: not in enabled drivers build config 00:02:30.077 common/cnxk: not in enabled drivers build config 00:02:30.077 common/mlx5: not in enabled drivers build config 00:02:30.077 common/nfp: not in enabled drivers build config 00:02:30.077 common/nitrox: not in enabled drivers build config 00:02:30.077 common/qat: not in enabled drivers build config 00:02:30.077 common/sfc_efx: not in enabled drivers build config 00:02:30.077 mempool/bucket: not in enabled drivers build config 00:02:30.077 mempool/cnxk: not in enabled drivers build config 00:02:30.077 mempool/dpaa: not in enabled drivers build config 00:02:30.077 mempool/dpaa2: not in enabled drivers build config 00:02:30.077 mempool/octeontx: not in enabled drivers build config 00:02:30.077 mempool/stack: not in enabled drivers build config 00:02:30.077 dma/cnxk: not in enabled drivers build config 00:02:30.077 dma/dpaa: not in enabled drivers build config 00:02:30.077 dma/dpaa2: not in enabled drivers build config 00:02:30.077 dma/hisilicon: not in enabled drivers build config 00:02:30.077 dma/idxd: not in enabled drivers build config 00:02:30.077 dma/ioat: not in enabled drivers build config 00:02:30.077 dma/skeleton: not in enabled drivers build config 00:02:30.077 net/af_packet: not in enabled drivers build config 00:02:30.077 net/af_xdp: not in enabled drivers build config 00:02:30.077 net/ark: not in enabled drivers build config 00:02:30.077 net/atlantic: not in enabled drivers build config 00:02:30.077 net/avp: not in enabled drivers build config 00:02:30.077 net/axgbe: not in enabled drivers build config 00:02:30.077 net/bnx2x: not in enabled drivers build config 00:02:30.077 net/bnxt: not in enabled drivers build config 00:02:30.077 net/bonding: not in enabled drivers build config 00:02:30.077 net/cnxk: not in enabled drivers build config 00:02:30.077 net/cpfl: not in enabled drivers build config 
00:02:30.077 net/cxgbe: not in enabled drivers build config 00:02:30.077 net/dpaa: not in enabled drivers build config 00:02:30.077 net/dpaa2: not in enabled drivers build config 00:02:30.077 net/e1000: not in enabled drivers build config 00:02:30.077 net/ena: not in enabled drivers build config 00:02:30.077 net/enetc: not in enabled drivers build config 00:02:30.077 net/enetfec: not in enabled drivers build config 00:02:30.077 net/enic: not in enabled drivers build config 00:02:30.077 net/failsafe: not in enabled drivers build config 00:02:30.077 net/fm10k: not in enabled drivers build config 00:02:30.077 net/gve: not in enabled drivers build config 00:02:30.077 net/hinic: not in enabled drivers build config 00:02:30.077 net/hns3: not in enabled drivers build config 00:02:30.077 net/i40e: not in enabled drivers build config 00:02:30.077 net/iavf: not in enabled drivers build config 00:02:30.077 net/ice: not in enabled drivers build config 00:02:30.077 net/idpf: not in enabled drivers build config 00:02:30.077 net/igc: not in enabled drivers build config 00:02:30.077 net/ionic: not in enabled drivers build config 00:02:30.077 net/ipn3ke: not in enabled drivers build config 00:02:30.077 net/ixgbe: not in enabled drivers build config 00:02:30.077 net/mana: not in enabled drivers build config 00:02:30.077 net/memif: not in enabled drivers build config 00:02:30.077 net/mlx4: not in enabled drivers build config 00:02:30.077 net/mlx5: not in enabled drivers build config 00:02:30.077 net/mvneta: not in enabled drivers build config 00:02:30.077 net/mvpp2: not in enabled drivers build config 00:02:30.077 net/netvsc: not in enabled drivers build config 00:02:30.077 net/nfb: not in enabled drivers build config 00:02:30.077 net/nfp: not in enabled drivers build config 00:02:30.077 net/ngbe: not in enabled drivers build config 00:02:30.077 net/null: not in enabled drivers build config 00:02:30.077 net/octeontx: not in enabled drivers build config 00:02:30.077 net/octeon_ep: not 
in enabled drivers build config 00:02:30.077 net/pcap: not in enabled drivers build config 00:02:30.077 net/pfe: not in enabled drivers build config 00:02:30.077 net/qede: not in enabled drivers build config 00:02:30.077 net/ring: not in enabled drivers build config 00:02:30.077 net/sfc: not in enabled drivers build config 00:02:30.077 net/softnic: not in enabled drivers build config 00:02:30.077 net/tap: not in enabled drivers build config 00:02:30.077 net/thunderx: not in enabled drivers build config 00:02:30.077 net/txgbe: not in enabled drivers build config 00:02:30.077 net/vdev_netvsc: not in enabled drivers build config 00:02:30.077 net/vhost: not in enabled drivers build config 00:02:30.077 net/virtio: not in enabled drivers build config 00:02:30.077 net/vmxnet3: not in enabled drivers build config 00:02:30.077 raw/*: missing internal dependency, "rawdev" 00:02:30.077 crypto/armv8: not in enabled drivers build config 00:02:30.077 crypto/bcmfs: not in enabled drivers build config 00:02:30.077 crypto/caam_jr: not in enabled drivers build config 00:02:30.077 crypto/ccp: not in enabled drivers build config 00:02:30.077 crypto/cnxk: not in enabled drivers build config 00:02:30.077 crypto/dpaa_sec: not in enabled drivers build config 00:02:30.077 crypto/dpaa2_sec: not in enabled drivers build config 00:02:30.077 crypto/ipsec_mb: not in enabled drivers build config 00:02:30.077 crypto/mlx5: not in enabled drivers build config 00:02:30.077 crypto/mvsam: not in enabled drivers build config 00:02:30.077 crypto/nitrox: not in enabled drivers build config 00:02:30.077 crypto/null: not in enabled drivers build config 00:02:30.077 crypto/octeontx: not in enabled drivers build config 00:02:30.077 crypto/openssl: not in enabled drivers build config 00:02:30.077 crypto/scheduler: not in enabled drivers build config 00:02:30.077 crypto/uadk: not in enabled drivers build config 00:02:30.077 crypto/virtio: not in enabled drivers build config 00:02:30.077 compress/isal: not in 
enabled drivers build config 00:02:30.077 compress/mlx5: not in enabled drivers build config 00:02:30.077 compress/nitrox: not in enabled drivers build config 00:02:30.077 compress/octeontx: not in enabled drivers build config 00:02:30.077 compress/zlib: not in enabled drivers build config 00:02:30.077 regex/*: missing internal dependency, "regexdev" 00:02:30.077 ml/*: missing internal dependency, "mldev" 00:02:30.077 vdpa/ifc: not in enabled drivers build config 00:02:30.077 vdpa/mlx5: not in enabled drivers build config 00:02:30.077 vdpa/nfp: not in enabled drivers build config 00:02:30.077 vdpa/sfc: not in enabled drivers build config 00:02:30.077 event/*: missing internal dependency, "eventdev" 00:02:30.077 baseband/*: missing internal dependency, "bbdev" 00:02:30.077 gpu/*: missing internal dependency, "gpudev" 00:02:30.077 00:02:30.077 00:02:30.077 Build targets in project: 84 00:02:30.077 00:02:30.077 DPDK 24.03.0 00:02:30.077 00:02:30.077 User defined options 00:02:30.077 buildtype : debug 00:02:30.077 default_library : shared 00:02:30.078 libdir : lib 00:02:30.078 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:30.078 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:30.078 c_link_args : 00:02:30.078 cpu_instruction_set: native 00:02:30.078 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:30.078 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:30.078 enable_docs : false 00:02:30.078 enable_drivers : 
bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:30.078 enable_kmods : false 00:02:30.078 max_lcores : 128 00:02:30.078 tests : false 00:02:30.078 00:02:30.078 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:30.078 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:30.078 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:30.078 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:30.078 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:30.078 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:30.078 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:30.078 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:30.078 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:30.078 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:30.078 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:30.078 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:30.078 [11/267] Linking static target lib/librte_kvargs.a 00:02:30.078 [12/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:30.078 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:30.078 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:30.078 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:30.078 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:30.078 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:30.078 [18/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:30.078 [19/267] 
Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:30.078 [20/267] Linking static target lib/librte_log.a 00:02:30.078 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:30.078 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:30.078 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:30.078 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:30.078 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:30.078 [26/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:30.078 [27/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:30.078 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:30.078 [29/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:30.336 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:30.336 [31/267] Linking static target lib/librte_pci.a 00:02:30.336 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:30.336 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:30.336 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:30.336 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:30.336 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:30.336 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:30.336 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:30.336 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:30.336 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.596 [41/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:30.596 [42/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:30.596 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:30.596 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:30.596 [45/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:30.596 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:30.596 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:30.596 [48/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:30.596 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:30.596 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:30.597 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:30.597 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:30.597 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:30.597 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:30.597 [55/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:30.597 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:30.597 [57/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:30.597 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:30.597 [59/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:30.597 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:30.597 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:30.597 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:30.597 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 
00:02:30.597 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:30.597 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:30.597 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:30.597 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:30.597 [68/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:30.597 [69/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:30.597 [70/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:30.597 [71/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:30.597 [72/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:30.597 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:30.597 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:30.597 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:30.597 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:30.597 [77/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:30.597 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:30.597 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:30.597 [80/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:30.597 [81/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:30.597 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:30.597 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:30.597 [84/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:30.597 [85/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:30.597 [86/267] Compiling C 
object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:30.597 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:30.597 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:30.597 [89/267] Linking static target lib/librte_meter.a 00:02:30.597 [90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:30.597 [91/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:30.597 [92/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:30.597 [93/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:30.597 [94/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:30.597 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:30.597 [96/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:30.597 [97/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:30.597 [98/267] Linking static target lib/librte_ring.a 00:02:30.597 [99/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:30.597 [100/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:30.597 [101/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:30.597 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:30.597 [103/267] Linking static target lib/librte_cmdline.a 00:02:30.597 [104/267] Linking static target lib/librte_telemetry.a 00:02:30.597 [105/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:30.597 [106/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:30.597 [107/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:30.597 [108/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:30.597 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 
00:02:30.597 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:30.597 [111/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:30.597 [112/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:30.597 [113/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:30.597 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:30.597 [115/267] Linking static target lib/librte_timer.a 00:02:30.597 [116/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:30.597 [117/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:30.597 [118/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:30.597 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:30.597 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:30.597 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:30.597 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:30.597 [123/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:30.597 [124/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:30.597 [125/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:30.597 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:30.597 [127/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:30.597 [128/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:30.597 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:30.597 [130/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:30.597 [131/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:30.597 [132/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:30.597 [133/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:30.597 [134/267] Linking static target lib/librte_rcu.a 00:02:30.597 [135/267] Linking static target lib/librte_dmadev.a 00:02:30.597 [136/267] Linking static target lib/librte_mempool.a 00:02:30.597 [137/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:30.597 [138/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:30.597 [139/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:30.597 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:30.597 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:30.597 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:30.597 [143/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:30.597 [144/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:30.597 [145/267] Linking static target lib/librte_net.a 00:02:30.597 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:30.859 [147/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:30.859 [148/267] Linking static target lib/librte_power.a 00:02:30.859 [149/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:30.859 [150/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:30.859 [151/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.859 [152/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:30.859 [153/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:30.859 [154/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:30.859 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:30.859 
[156/267] Linking static target lib/librte_reorder.a 00:02:30.859 [157/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:30.859 [158/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:30.859 [159/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:30.859 [160/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:30.859 [161/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:30.859 [162/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:30.859 [163/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:30.859 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:30.859 [165/267] Linking static target lib/librte_compressdev.a 00:02:30.859 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:30.859 [167/267] Linking target lib/librte_log.so.24.1 00:02:30.859 [168/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:30.859 [169/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:30.859 [170/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:30.859 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:30.859 [172/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:30.859 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:30.859 [174/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:30.859 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:30.859 [176/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:30.859 [177/267] Linking static target lib/librte_eal.a 00:02:30.859 [178/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:30.859 [179/267] Linking static target lib/librte_security.a 
00:02:30.859 [180/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.859 [181/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:30.859 [182/267] Linking static target lib/librte_mbuf.a 00:02:30.859 [183/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:30.859 [184/267] Linking static target lib/librte_hash.a 00:02:30.859 [185/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:30.859 [186/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:30.859 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:30.859 [188/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:30.859 [189/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:30.859 [190/267] Linking static target drivers/librte_bus_vdev.a 00:02:30.859 [191/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.859 [192/267] Linking target lib/librte_kvargs.so.24.1 00:02:30.859 [193/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:31.121 [194/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:31.121 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:31.121 [196/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:31.121 [197/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:31.121 [198/267] Linking static target drivers/librte_mempool_ring.a 00:02:31.121 [199/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.121 [200/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:31.121 [201/267] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:31.121 [202/267] Linking static target drivers/librte_bus_pci.a 00:02:31.121 [203/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:31.121 [204/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:31.121 [205/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.121 [206/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.121 [207/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:31.121 [208/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:31.121 [209/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.121 [210/267] Linking static target lib/librte_cryptodev.a 00:02:31.121 [211/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.382 [212/267] Linking target lib/librte_telemetry.so.24.1 00:02:31.382 [213/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:31.382 [214/267] Linking static target lib/librte_ethdev.a 00:02:31.382 [215/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.382 [216/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:31.382 [217/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.644 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:31.644 [219/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.644 [220/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.644 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:31.644 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.907 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.907 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.907 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.907 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.853 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:32.853 [228/267] Linking static target lib/librte_vhost.a 00:02:33.425 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.808 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.392 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.335 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.335 [233/267] Linking target lib/librte_eal.so.24.1 00:02:42.597 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:42.597 [235/267] Linking target lib/librte_ring.so.24.1 00:02:42.598 [236/267] Linking target lib/librte_timer.so.24.1 00:02:42.598 [237/267] Linking target lib/librte_meter.so.24.1 00:02:42.598 [238/267] Linking target lib/librte_pci.so.24.1 00:02:42.598 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:42.598 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:42.598 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:42.598 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:42.598 [243/267] Generating symbol file 
lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:42.598 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:42.598 [245/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:42.859 [246/267] Linking target lib/librte_mempool.so.24.1 00:02:42.859 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:42.859 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:42.859 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:42.860 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:42.860 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:42.860 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:43.120 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:43.120 [254/267] Linking target lib/librte_compressdev.so.24.1 00:02:43.120 [255/267] Linking target lib/librte_reorder.so.24.1 00:02:43.120 [256/267] Linking target lib/librte_net.so.24.1 00:02:43.121 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:43.121 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:43.381 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:43.381 [260/267] Linking target lib/librte_cmdline.so.24.1 00:02:43.381 [261/267] Linking target lib/librte_hash.so.24.1 00:02:43.381 [262/267] Linking target lib/librte_security.so.24.1 00:02:43.381 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:43.381 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:43.381 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:43.381 [266/267] Linking target lib/librte_power.so.24.1 00:02:43.642 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:43.643 INFO: autodetecting backend as ninja 
00:02:43.643 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:45.557 CC lib/ut_mock/mock.o 00:02:45.557 CC lib/log/log.o 00:02:45.557 CC lib/log/log_flags.o 00:02:45.557 CC lib/log/log_deprecated.o 00:02:45.557 CC lib/ut/ut.o 00:02:45.819 LIB libspdk_ut.a 00:02:45.819 LIB libspdk_ut_mock.a 00:02:45.819 LIB libspdk_log.a 00:02:45.819 SO libspdk_ut.so.2.0 00:02:45.819 SO libspdk_ut_mock.so.6.0 00:02:45.819 SO libspdk_log.so.7.1 00:02:45.819 SYMLINK libspdk_ut.so 00:02:45.819 SYMLINK libspdk_ut_mock.so 00:02:45.819 SYMLINK libspdk_log.so 00:02:46.081 CC lib/dma/dma.o 00:02:46.081 CXX lib/trace_parser/trace.o 00:02:46.081 CC lib/util/base64.o 00:02:46.081 CC lib/util/bit_array.o 00:02:46.081 CC lib/ioat/ioat.o 00:02:46.081 CC lib/util/cpuset.o 00:02:46.081 CC lib/util/crc16.o 00:02:46.081 CC lib/util/crc32.o 00:02:46.081 CC lib/util/crc32c.o 00:02:46.081 CC lib/util/crc32_ieee.o 00:02:46.081 CC lib/util/crc64.o 00:02:46.081 CC lib/util/dif.o 00:02:46.081 CC lib/util/fd.o 00:02:46.081 CC lib/util/fd_group.o 00:02:46.081 CC lib/util/file.o 00:02:46.081 CC lib/util/hexlify.o 00:02:46.081 CC lib/util/iov.o 00:02:46.081 CC lib/util/math.o 00:02:46.081 CC lib/util/net.o 00:02:46.081 CC lib/util/pipe.o 00:02:46.081 CC lib/util/strerror_tls.o 00:02:46.081 CC lib/util/string.o 00:02:46.081 CC lib/util/uuid.o 00:02:46.343 CC lib/util/xor.o 00:02:46.343 CC lib/util/zipf.o 00:02:46.343 CC lib/util/md5.o 00:02:46.343 CC lib/vfio_user/host/vfio_user_pci.o 00:02:46.343 CC lib/vfio_user/host/vfio_user.o 00:02:46.343 LIB libspdk_dma.a 00:02:46.343 SO libspdk_dma.so.5.0 00:02:46.605 LIB libspdk_ioat.a 00:02:46.605 SO libspdk_ioat.so.7.0 00:02:46.605 SYMLINK libspdk_dma.so 00:02:46.605 SYMLINK libspdk_ioat.so 00:02:46.605 LIB libspdk_vfio_user.a 00:02:46.605 SO libspdk_vfio_user.so.5.0 00:02:46.605 LIB libspdk_util.a 00:02:46.605 SYMLINK libspdk_vfio_user.so 00:02:46.867 SO libspdk_util.so.10.1 
00:02:46.867 SYMLINK libspdk_util.so 00:02:46.867 LIB libspdk_trace_parser.a 00:02:47.129 SO libspdk_trace_parser.so.6.0 00:02:47.129 SYMLINK libspdk_trace_parser.so 00:02:47.129 CC lib/json/json_parse.o 00:02:47.129 CC lib/json/json_util.o 00:02:47.129 CC lib/json/json_write.o 00:02:47.391 CC lib/conf/conf.o 00:02:47.391 CC lib/rdma_utils/rdma_utils.o 00:02:47.391 CC lib/vmd/vmd.o 00:02:47.391 CC lib/vmd/led.o 00:02:47.391 CC lib/env_dpdk/env.o 00:02:47.391 CC lib/env_dpdk/memory.o 00:02:47.391 CC lib/idxd/idxd.o 00:02:47.391 CC lib/env_dpdk/pci.o 00:02:47.391 CC lib/idxd/idxd_user.o 00:02:47.391 CC lib/env_dpdk/init.o 00:02:47.391 CC lib/env_dpdk/threads.o 00:02:47.391 CC lib/idxd/idxd_kernel.o 00:02:47.391 CC lib/env_dpdk/pci_ioat.o 00:02:47.391 CC lib/env_dpdk/pci_virtio.o 00:02:47.391 CC lib/env_dpdk/pci_vmd.o 00:02:47.391 CC lib/env_dpdk/pci_idxd.o 00:02:47.391 CC lib/env_dpdk/pci_event.o 00:02:47.391 CC lib/env_dpdk/sigbus_handler.o 00:02:47.391 CC lib/env_dpdk/pci_dpdk.o 00:02:47.391 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:47.391 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:47.653 LIB libspdk_conf.a 00:02:47.653 LIB libspdk_rdma_utils.a 00:02:47.653 SO libspdk_conf.so.6.0 00:02:47.653 LIB libspdk_json.a 00:02:47.653 SO libspdk_rdma_utils.so.1.0 00:02:47.653 SO libspdk_json.so.6.0 00:02:47.653 SYMLINK libspdk_conf.so 00:02:47.653 SYMLINK libspdk_rdma_utils.so 00:02:47.653 SYMLINK libspdk_json.so 00:02:47.917 LIB libspdk_idxd.a 00:02:47.917 LIB libspdk_vmd.a 00:02:47.917 SO libspdk_idxd.so.12.1 00:02:47.917 SO libspdk_vmd.so.6.0 00:02:47.917 SYMLINK libspdk_idxd.so 00:02:47.917 SYMLINK libspdk_vmd.so 00:02:48.178 CC lib/rdma_provider/common.o 00:02:48.178 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:48.178 CC lib/jsonrpc/jsonrpc_server.o 00:02:48.178 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:48.178 CC lib/jsonrpc/jsonrpc_client.o 00:02:48.178 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:48.178 LIB libspdk_rdma_provider.a 00:02:48.439 SO libspdk_rdma_provider.so.7.0 
00:02:48.439 LIB libspdk_jsonrpc.a 00:02:48.439 SO libspdk_jsonrpc.so.6.0 00:02:48.439 SYMLINK libspdk_rdma_provider.so 00:02:48.439 SYMLINK libspdk_jsonrpc.so 00:02:48.439 LIB libspdk_env_dpdk.a 00:02:48.700 SO libspdk_env_dpdk.so.15.1 00:02:48.700 SYMLINK libspdk_env_dpdk.so 00:02:48.961 CC lib/rpc/rpc.o 00:02:48.961 LIB libspdk_rpc.a 00:02:48.961 SO libspdk_rpc.so.6.0 00:02:49.222 SYMLINK libspdk_rpc.so 00:02:49.483 CC lib/notify/notify.o 00:02:49.483 CC lib/notify/notify_rpc.o 00:02:49.483 CC lib/trace/trace.o 00:02:49.483 CC lib/keyring/keyring.o 00:02:49.483 CC lib/trace/trace_flags.o 00:02:49.483 CC lib/keyring/keyring_rpc.o 00:02:49.483 CC lib/trace/trace_rpc.o 00:02:49.745 LIB libspdk_notify.a 00:02:49.745 SO libspdk_notify.so.6.0 00:02:49.745 LIB libspdk_trace.a 00:02:49.745 LIB libspdk_keyring.a 00:02:49.745 SO libspdk_trace.so.11.0 00:02:49.745 SO libspdk_keyring.so.2.0 00:02:49.745 SYMLINK libspdk_notify.so 00:02:49.745 SYMLINK libspdk_keyring.so 00:02:49.745 SYMLINK libspdk_trace.so 00:02:50.316 CC lib/thread/thread.o 00:02:50.316 CC lib/thread/iobuf.o 00:02:50.316 CC lib/sock/sock.o 00:02:50.316 CC lib/sock/sock_rpc.o 00:02:50.578 LIB libspdk_sock.a 00:02:50.578 SO libspdk_sock.so.10.0 00:02:50.838 SYMLINK libspdk_sock.so 00:02:51.100 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:51.100 CC lib/nvme/nvme_ctrlr.o 00:02:51.100 CC lib/nvme/nvme_fabric.o 00:02:51.100 CC lib/nvme/nvme_ns_cmd.o 00:02:51.100 CC lib/nvme/nvme_ns.o 00:02:51.100 CC lib/nvme/nvme_pcie_common.o 00:02:51.100 CC lib/nvme/nvme_pcie.o 00:02:51.100 CC lib/nvme/nvme_qpair.o 00:02:51.100 CC lib/nvme/nvme.o 00:02:51.100 CC lib/nvme/nvme_quirks.o 00:02:51.100 CC lib/nvme/nvme_transport.o 00:02:51.100 CC lib/nvme/nvme_discovery.o 00:02:51.100 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:51.100 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:51.100 CC lib/nvme/nvme_tcp.o 00:02:51.100 CC lib/nvme/nvme_opal.o 00:02:51.100 CC lib/nvme/nvme_io_msg.o 00:02:51.100 CC lib/nvme/nvme_poll_group.o 00:02:51.100 CC 
lib/nvme/nvme_zns.o 00:02:51.100 CC lib/nvme/nvme_stubs.o 00:02:51.100 CC lib/nvme/nvme_auth.o 00:02:51.100 CC lib/nvme/nvme_cuse.o 00:02:51.100 CC lib/nvme/nvme_vfio_user.o 00:02:51.100 CC lib/nvme/nvme_rdma.o 00:02:51.671 LIB libspdk_thread.a 00:02:51.671 SO libspdk_thread.so.11.0 00:02:51.671 SYMLINK libspdk_thread.so 00:02:51.932 CC lib/accel/accel.o 00:02:51.932 CC lib/accel/accel_rpc.o 00:02:51.932 CC lib/accel/accel_sw.o 00:02:51.932 CC lib/fsdev/fsdev.o 00:02:51.932 CC lib/fsdev/fsdev_io.o 00:02:51.932 CC lib/virtio/virtio.o 00:02:51.932 CC lib/fsdev/fsdev_rpc.o 00:02:51.932 CC lib/virtio/virtio_vhost_user.o 00:02:51.932 CC lib/virtio/virtio_vfio_user.o 00:02:51.932 CC lib/virtio/virtio_pci.o 00:02:52.193 CC lib/blob/blobstore.o 00:02:52.193 CC lib/blob/request.o 00:02:52.193 CC lib/blob/zeroes.o 00:02:52.193 CC lib/vfu_tgt/tgt_endpoint.o 00:02:52.193 CC lib/blob/blob_bs_dev.o 00:02:52.193 CC lib/vfu_tgt/tgt_rpc.o 00:02:52.193 CC lib/init/json_config.o 00:02:52.193 CC lib/init/subsystem.o 00:02:52.193 CC lib/init/subsystem_rpc.o 00:02:52.193 CC lib/init/rpc.o 00:02:52.455 LIB libspdk_init.a 00:02:52.455 SO libspdk_init.so.6.0 00:02:52.455 LIB libspdk_virtio.a 00:02:52.455 LIB libspdk_vfu_tgt.a 00:02:52.455 SYMLINK libspdk_init.so 00:02:52.455 SO libspdk_vfu_tgt.so.3.0 00:02:52.455 SO libspdk_virtio.so.7.0 00:02:52.455 SYMLINK libspdk_vfu_tgt.so 00:02:52.455 SYMLINK libspdk_virtio.so 00:02:52.716 LIB libspdk_fsdev.a 00:02:52.716 SO libspdk_fsdev.so.2.0 00:02:52.716 SYMLINK libspdk_fsdev.so 00:02:52.716 CC lib/event/app.o 00:02:52.716 CC lib/event/reactor.o 00:02:52.716 CC lib/event/log_rpc.o 00:02:52.716 CC lib/event/app_rpc.o 00:02:52.716 CC lib/event/scheduler_static.o 00:02:52.978 LIB libspdk_accel.a 00:02:52.978 SO libspdk_accel.so.16.0 00:02:52.978 LIB libspdk_nvme.a 00:02:53.240 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:53.240 SYMLINK libspdk_accel.so 00:02:53.240 SO libspdk_nvme.so.15.0 00:02:53.240 LIB libspdk_event.a 00:02:53.240 SO 
libspdk_event.so.14.0 00:02:53.240 SYMLINK libspdk_event.so 00:02:53.501 SYMLINK libspdk_nvme.so 00:02:53.501 CC lib/bdev/bdev.o 00:02:53.501 CC lib/bdev/bdev_rpc.o 00:02:53.501 CC lib/bdev/bdev_zone.o 00:02:53.501 CC lib/bdev/part.o 00:02:53.501 CC lib/bdev/scsi_nvme.o 00:02:53.763 LIB libspdk_fuse_dispatcher.a 00:02:53.763 SO libspdk_fuse_dispatcher.so.1.0 00:02:53.763 SYMLINK libspdk_fuse_dispatcher.so 00:02:54.707 LIB libspdk_blob.a 00:02:54.707 SO libspdk_blob.so.11.0 00:02:54.968 SYMLINK libspdk_blob.so 00:02:55.229 CC lib/blobfs/blobfs.o 00:02:55.229 CC lib/blobfs/tree.o 00:02:55.229 CC lib/lvol/lvol.o 00:02:55.799 LIB libspdk_bdev.a 00:02:55.799 SO libspdk_bdev.so.17.0 00:02:56.059 LIB libspdk_blobfs.a 00:02:56.059 SO libspdk_blobfs.so.10.0 00:02:56.059 SYMLINK libspdk_bdev.so 00:02:56.059 LIB libspdk_lvol.a 00:02:56.059 SYMLINK libspdk_blobfs.so 00:02:56.059 SO libspdk_lvol.so.10.0 00:02:56.059 SYMLINK libspdk_lvol.so 00:02:56.321 CC lib/nbd/nbd.o 00:02:56.321 CC lib/nbd/nbd_rpc.o 00:02:56.321 CC lib/nvmf/ctrlr.o 00:02:56.321 CC lib/scsi/dev.o 00:02:56.321 CC lib/nvmf/ctrlr_discovery.o 00:02:56.321 CC lib/scsi/lun.o 00:02:56.321 CC lib/nvmf/ctrlr_bdev.o 00:02:56.321 CC lib/scsi/port.o 00:02:56.321 CC lib/ublk/ublk.o 00:02:56.321 CC lib/nvmf/subsystem.o 00:02:56.321 CC lib/scsi/scsi.o 00:02:56.321 CC lib/ublk/ublk_rpc.o 00:02:56.321 CC lib/nvmf/nvmf.o 00:02:56.321 CC lib/ftl/ftl_core.o 00:02:56.321 CC lib/scsi/scsi_bdev.o 00:02:56.321 CC lib/nvmf/nvmf_rpc.o 00:02:56.321 CC lib/scsi/scsi_pr.o 00:02:56.321 CC lib/ftl/ftl_init.o 00:02:56.321 CC lib/nvmf/transport.o 00:02:56.321 CC lib/ftl/ftl_layout.o 00:02:56.321 CC lib/nvmf/tcp.o 00:02:56.321 CC lib/scsi/scsi_rpc.o 00:02:56.321 CC lib/scsi/task.o 00:02:56.321 CC lib/ftl/ftl_debug.o 00:02:56.321 CC lib/nvmf/stubs.o 00:02:56.321 CC lib/ftl/ftl_io.o 00:02:56.321 CC lib/nvmf/mdns_server.o 00:02:56.321 CC lib/nvmf/vfio_user.o 00:02:56.321 CC lib/ftl/ftl_sb.o 00:02:56.321 CC lib/nvmf/rdma.o 00:02:56.321 CC 
lib/ftl/ftl_l2p.o 00:02:56.321 CC lib/ftl/ftl_l2p_flat.o 00:02:56.321 CC lib/nvmf/auth.o 00:02:56.321 CC lib/ftl/ftl_nv_cache.o 00:02:56.321 CC lib/ftl/ftl_band.o 00:02:56.321 CC lib/ftl/ftl_band_ops.o 00:02:56.321 CC lib/ftl/ftl_rq.o 00:02:56.321 CC lib/ftl/ftl_writer.o 00:02:56.321 CC lib/ftl/ftl_reloc.o 00:02:56.321 CC lib/ftl/ftl_l2p_cache.o 00:02:56.321 CC lib/ftl/ftl_p2l.o 00:02:56.321 CC lib/ftl/ftl_p2l_log.o 00:02:56.321 CC lib/ftl/mngt/ftl_mngt.o 00:02:56.321 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:56.321 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:56.321 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:56.321 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:56.321 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:56.321 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:56.321 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:56.321 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:56.321 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:56.321 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:56.321 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:56.321 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:56.321 CC lib/ftl/utils/ftl_conf.o 00:02:56.321 CC lib/ftl/utils/ftl_md.o 00:02:56.321 CC lib/ftl/utils/ftl_mempool.o 00:02:56.321 CC lib/ftl/utils/ftl_bitmap.o 00:02:56.582 CC lib/ftl/utils/ftl_property.o 00:02:56.582 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:56.582 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:56.582 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:56.582 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:56.582 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:56.582 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:56.582 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:56.582 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:56.582 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:56.582 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:56.582 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:56.582 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:56.582 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:56.582 CC lib/ftl/base/ftl_base_dev.o 00:02:56.582 CC lib/ftl/base/ftl_base_bdev.o 00:02:56.582 CC lib/ftl/ftl_trace.o 
00:02:57.152 LIB libspdk_nbd.a 00:02:57.152 SO libspdk_nbd.so.7.0 00:02:57.152 SYMLINK libspdk_nbd.so 00:02:57.152 LIB libspdk_scsi.a 00:02:57.152 SO libspdk_scsi.so.9.0 00:02:57.413 LIB libspdk_ublk.a 00:02:57.413 SYMLINK libspdk_scsi.so 00:02:57.413 SO libspdk_ublk.so.3.0 00:02:57.413 SYMLINK libspdk_ublk.so 00:02:57.674 LIB libspdk_ftl.a 00:02:57.674 CC lib/iscsi/conn.o 00:02:57.674 CC lib/iscsi/init_grp.o 00:02:57.674 CC lib/vhost/vhost.o 00:02:57.674 CC lib/iscsi/iscsi.o 00:02:57.674 CC lib/vhost/vhost_rpc.o 00:02:57.674 CC lib/iscsi/param.o 00:02:57.674 CC lib/vhost/vhost_scsi.o 00:02:57.674 CC lib/iscsi/portal_grp.o 00:02:57.674 CC lib/vhost/vhost_blk.o 00:02:57.674 CC lib/iscsi/tgt_node.o 00:02:57.674 CC lib/vhost/rte_vhost_user.o 00:02:57.674 CC lib/iscsi/iscsi_subsystem.o 00:02:57.674 CC lib/iscsi/iscsi_rpc.o 00:02:57.674 CC lib/iscsi/task.o 00:02:57.934 SO libspdk_ftl.so.9.0 00:02:58.194 SYMLINK libspdk_ftl.so 00:02:58.455 LIB libspdk_nvmf.a 00:02:58.716 SO libspdk_nvmf.so.20.0 00:02:58.716 LIB libspdk_vhost.a 00:02:58.716 SO libspdk_vhost.so.8.0 00:02:58.716 SYMLINK libspdk_nvmf.so 00:02:58.716 SYMLINK libspdk_vhost.so 00:02:58.976 LIB libspdk_iscsi.a 00:02:58.976 SO libspdk_iscsi.so.8.0 00:02:59.237 SYMLINK libspdk_iscsi.so 00:02:59.810 CC module/env_dpdk/env_dpdk_rpc.o 00:02:59.810 CC module/vfu_device/vfu_virtio.o 00:02:59.810 CC module/vfu_device/vfu_virtio_blk.o 00:02:59.810 CC module/vfu_device/vfu_virtio_rpc.o 00:02:59.810 CC module/vfu_device/vfu_virtio_scsi.o 00:02:59.810 CC module/vfu_device/vfu_virtio_fs.o 00:02:59.810 CC module/keyring/file/keyring.o 00:02:59.810 CC module/keyring/file/keyring_rpc.o 00:02:59.810 CC module/fsdev/aio/fsdev_aio.o 00:02:59.810 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:59.810 LIB libspdk_env_dpdk_rpc.a 00:02:59.810 CC module/fsdev/aio/linux_aio_mgr.o 00:02:59.810 CC module/scheduler/gscheduler/gscheduler.o 00:02:59.810 CC module/accel/error/accel_error.o 00:02:59.810 CC module/accel/error/accel_error_rpc.o 
00:02:59.810 CC module/blob/bdev/blob_bdev.o 00:02:59.810 CC module/accel/ioat/accel_ioat.o 00:02:59.810 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:59.810 CC module/accel/ioat/accel_ioat_rpc.o 00:02:59.810 CC module/accel/dsa/accel_dsa.o 00:02:59.810 CC module/sock/posix/posix.o 00:02:59.810 CC module/accel/dsa/accel_dsa_rpc.o 00:02:59.810 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:59.810 CC module/accel/iaa/accel_iaa.o 00:02:59.810 CC module/accel/iaa/accel_iaa_rpc.o 00:02:59.810 CC module/keyring/linux/keyring.o 00:02:59.810 CC module/keyring/linux/keyring_rpc.o 00:02:59.810 SO libspdk_env_dpdk_rpc.so.6.0 00:03:00.071 SYMLINK libspdk_env_dpdk_rpc.so 00:03:00.071 LIB libspdk_scheduler_gscheduler.a 00:03:00.071 LIB libspdk_keyring_file.a 00:03:00.071 LIB libspdk_keyring_linux.a 00:03:00.071 SO libspdk_scheduler_gscheduler.so.4.0 00:03:00.071 LIB libspdk_scheduler_dpdk_governor.a 00:03:00.071 SO libspdk_keyring_file.so.2.0 00:03:00.071 LIB libspdk_accel_ioat.a 00:03:00.071 SO libspdk_keyring_linux.so.1.0 00:03:00.071 LIB libspdk_accel_error.a 00:03:00.071 LIB libspdk_scheduler_dynamic.a 00:03:00.071 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:00.071 LIB libspdk_accel_iaa.a 00:03:00.071 SO libspdk_accel_ioat.so.6.0 00:03:00.071 SYMLINK libspdk_scheduler_gscheduler.so 00:03:00.071 SO libspdk_scheduler_dynamic.so.4.0 00:03:00.071 SO libspdk_accel_error.so.2.0 00:03:00.071 SYMLINK libspdk_keyring_file.so 00:03:00.332 LIB libspdk_blob_bdev.a 00:03:00.332 SO libspdk_accel_iaa.so.3.0 00:03:00.332 SYMLINK libspdk_keyring_linux.so 00:03:00.332 LIB libspdk_accel_dsa.a 00:03:00.332 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:00.332 SO libspdk_blob_bdev.so.11.0 00:03:00.332 SYMLINK libspdk_accel_ioat.so 00:03:00.332 SYMLINK libspdk_scheduler_dynamic.so 00:03:00.332 SO libspdk_accel_dsa.so.5.0 00:03:00.332 SYMLINK libspdk_accel_error.so 00:03:00.332 SYMLINK libspdk_accel_iaa.so 00:03:00.332 LIB libspdk_vfu_device.a 00:03:00.332 SYMLINK 
libspdk_blob_bdev.so 00:03:00.332 SYMLINK libspdk_accel_dsa.so 00:03:00.332 SO libspdk_vfu_device.so.3.0 00:03:00.332 SYMLINK libspdk_vfu_device.so 00:03:00.593 LIB libspdk_fsdev_aio.a 00:03:00.593 SO libspdk_fsdev_aio.so.1.0 00:03:00.593 LIB libspdk_sock_posix.a 00:03:00.593 SYMLINK libspdk_fsdev_aio.so 00:03:00.593 SO libspdk_sock_posix.so.6.0 00:03:00.855 SYMLINK libspdk_sock_posix.so 00:03:00.855 CC module/bdev/malloc/bdev_malloc.o 00:03:00.855 CC module/bdev/gpt/gpt.o 00:03:00.855 CC module/bdev/gpt/vbdev_gpt.o 00:03:00.855 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:00.855 CC module/blobfs/bdev/blobfs_bdev.o 00:03:00.855 CC module/bdev/null/bdev_null.o 00:03:00.855 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:00.855 CC module/bdev/null/bdev_null_rpc.o 00:03:00.855 CC module/bdev/delay/vbdev_delay.o 00:03:00.855 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:00.855 CC module/bdev/lvol/vbdev_lvol.o 00:03:00.855 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:00.855 CC module/bdev/iscsi/bdev_iscsi.o 00:03:00.855 CC module/bdev/ftl/bdev_ftl.o 00:03:00.855 CC module/bdev/nvme/bdev_nvme.o 00:03:00.855 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:00.855 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:00.855 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:00.855 CC module/bdev/error/vbdev_error.o 00:03:00.855 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:00.855 CC module/bdev/error/vbdev_error_rpc.o 00:03:00.855 CC module/bdev/passthru/vbdev_passthru.o 00:03:00.855 CC module/bdev/raid/bdev_raid.o 00:03:00.855 CC module/bdev/nvme/nvme_rpc.o 00:03:00.855 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:00.855 CC module/bdev/nvme/bdev_mdns_client.o 00:03:00.855 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:00.855 CC module/bdev/raid/bdev_raid_rpc.o 00:03:00.855 CC module/bdev/split/vbdev_split.o 00:03:00.855 CC module/bdev/nvme/vbdev_opal.o 00:03:00.855 CC module/bdev/raid/bdev_raid_sb.o 00:03:00.855 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:00.855 CC 
module/bdev/raid/raid0.o 00:03:00.855 CC module/bdev/split/vbdev_split_rpc.o 00:03:00.855 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:00.855 CC module/bdev/aio/bdev_aio.o 00:03:00.855 CC module/bdev/raid/raid1.o 00:03:00.855 CC module/bdev/raid/concat.o 00:03:00.855 CC module/bdev/aio/bdev_aio_rpc.o 00:03:00.855 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:00.855 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:00.855 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:01.115 LIB libspdk_blobfs_bdev.a 00:03:01.115 SO libspdk_blobfs_bdev.so.6.0 00:03:01.115 LIB libspdk_bdev_gpt.a 00:03:01.376 LIB libspdk_bdev_null.a 00:03:01.376 LIB libspdk_bdev_split.a 00:03:01.376 LIB libspdk_bdev_ftl.a 00:03:01.376 SO libspdk_bdev_gpt.so.6.0 00:03:01.376 SYMLINK libspdk_blobfs_bdev.so 00:03:01.376 SO libspdk_bdev_null.so.6.0 00:03:01.376 LIB libspdk_bdev_error.a 00:03:01.376 LIB libspdk_bdev_passthru.a 00:03:01.376 SO libspdk_bdev_split.so.6.0 00:03:01.376 SO libspdk_bdev_ftl.so.6.0 00:03:01.376 LIB libspdk_bdev_delay.a 00:03:01.376 SO libspdk_bdev_error.so.6.0 00:03:01.376 LIB libspdk_bdev_malloc.a 00:03:01.376 SO libspdk_bdev_passthru.so.6.0 00:03:01.376 LIB libspdk_bdev_aio.a 00:03:01.376 SYMLINK libspdk_bdev_gpt.so 00:03:01.376 LIB libspdk_bdev_zone_block.a 00:03:01.376 SO libspdk_bdev_malloc.so.6.0 00:03:01.376 SYMLINK libspdk_bdev_null.so 00:03:01.376 SO libspdk_bdev_delay.so.6.0 00:03:01.376 LIB libspdk_bdev_iscsi.a 00:03:01.376 SYMLINK libspdk_bdev_split.so 00:03:01.376 SO libspdk_bdev_zone_block.so.6.0 00:03:01.376 SYMLINK libspdk_bdev_ftl.so 00:03:01.376 SO libspdk_bdev_aio.so.6.0 00:03:01.376 SYMLINK libspdk_bdev_error.so 00:03:01.376 SO libspdk_bdev_iscsi.so.6.0 00:03:01.376 SYMLINK libspdk_bdev_passthru.so 00:03:01.376 SYMLINK libspdk_bdev_delay.so 00:03:01.376 SYMLINK libspdk_bdev_malloc.so 00:03:01.376 SYMLINK libspdk_bdev_aio.so 00:03:01.376 SYMLINK libspdk_bdev_zone_block.so 00:03:01.376 LIB libspdk_bdev_lvol.a 00:03:01.376 SYMLINK libspdk_bdev_iscsi.so 00:03:01.637 
LIB libspdk_bdev_virtio.a 00:03:01.637 SO libspdk_bdev_lvol.so.6.0 00:03:01.637 SO libspdk_bdev_virtio.so.6.0 00:03:01.637 SYMLINK libspdk_bdev_lvol.so 00:03:01.637 SYMLINK libspdk_bdev_virtio.so 00:03:01.899 LIB libspdk_bdev_raid.a 00:03:01.899 SO libspdk_bdev_raid.so.6.0 00:03:02.160 SYMLINK libspdk_bdev_raid.so 00:03:03.546 LIB libspdk_bdev_nvme.a 00:03:03.546 SO libspdk_bdev_nvme.so.7.1 00:03:03.546 SYMLINK libspdk_bdev_nvme.so 00:03:04.116 CC module/event/subsystems/iobuf/iobuf.o 00:03:04.116 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:04.116 CC module/event/subsystems/sock/sock.o 00:03:04.116 CC module/event/subsystems/vmd/vmd.o 00:03:04.116 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:04.116 CC module/event/subsystems/keyring/keyring.o 00:03:04.116 CC module/event/subsystems/scheduler/scheduler.o 00:03:04.116 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:04.116 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:04.116 CC module/event/subsystems/fsdev/fsdev.o 00:03:04.377 LIB libspdk_event_keyring.a 00:03:04.377 LIB libspdk_event_fsdev.a 00:03:04.377 LIB libspdk_event_sock.a 00:03:04.377 LIB libspdk_event_iobuf.a 00:03:04.377 LIB libspdk_event_vhost_blk.a 00:03:04.377 LIB libspdk_event_vmd.a 00:03:04.377 LIB libspdk_event_scheduler.a 00:03:04.377 LIB libspdk_event_vfu_tgt.a 00:03:04.377 SO libspdk_event_keyring.so.1.0 00:03:04.377 SO libspdk_event_iobuf.so.3.0 00:03:04.377 SO libspdk_event_fsdev.so.1.0 00:03:04.377 SO libspdk_event_sock.so.5.0 00:03:04.377 SO libspdk_event_vhost_blk.so.3.0 00:03:04.377 SO libspdk_event_scheduler.so.4.0 00:03:04.377 SO libspdk_event_vfu_tgt.so.3.0 00:03:04.377 SO libspdk_event_vmd.so.6.0 00:03:04.377 SYMLINK libspdk_event_keyring.so 00:03:04.377 SYMLINK libspdk_event_fsdev.so 00:03:04.377 SYMLINK libspdk_event_sock.so 00:03:04.377 SYMLINK libspdk_event_iobuf.so 00:03:04.377 SYMLINK libspdk_event_scheduler.so 00:03:04.377 SYMLINK libspdk_event_vhost_blk.so 00:03:04.377 SYMLINK libspdk_event_vfu_tgt.so 
00:03:04.377 SYMLINK libspdk_event_vmd.so 00:03:04.949 CC module/event/subsystems/accel/accel.o 00:03:04.949 LIB libspdk_event_accel.a 00:03:04.949 SO libspdk_event_accel.so.6.0 00:03:04.949 SYMLINK libspdk_event_accel.so 00:03:05.521 CC module/event/subsystems/bdev/bdev.o 00:03:05.521 LIB libspdk_event_bdev.a 00:03:05.521 SO libspdk_event_bdev.so.6.0 00:03:05.782 SYMLINK libspdk_event_bdev.so 00:03:06.043 CC module/event/subsystems/scsi/scsi.o 00:03:06.043 CC module/event/subsystems/nbd/nbd.o 00:03:06.043 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:06.043 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:06.043 CC module/event/subsystems/ublk/ublk.o 00:03:06.304 LIB libspdk_event_ublk.a 00:03:06.304 LIB libspdk_event_nbd.a 00:03:06.304 LIB libspdk_event_scsi.a 00:03:06.304 SO libspdk_event_ublk.so.3.0 00:03:06.304 SO libspdk_event_nbd.so.6.0 00:03:06.304 SO libspdk_event_scsi.so.6.0 00:03:06.304 LIB libspdk_event_nvmf.a 00:03:06.304 SYMLINK libspdk_event_ublk.so 00:03:06.304 SYMLINK libspdk_event_nbd.so 00:03:06.304 SO libspdk_event_nvmf.so.6.0 00:03:06.304 SYMLINK libspdk_event_scsi.so 00:03:06.304 SYMLINK libspdk_event_nvmf.so 00:03:06.876 CC module/event/subsystems/iscsi/iscsi.o 00:03:06.876 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:06.876 LIB libspdk_event_vhost_scsi.a 00:03:06.876 LIB libspdk_event_iscsi.a 00:03:06.876 SO libspdk_event_vhost_scsi.so.3.0 00:03:06.876 SO libspdk_event_iscsi.so.6.0 00:03:07.137 SYMLINK libspdk_event_vhost_scsi.so 00:03:07.137 SYMLINK libspdk_event_iscsi.so 00:03:07.137 SO libspdk.so.6.0 00:03:07.137 SYMLINK libspdk.so 00:03:07.713 CXX app/trace/trace.o 00:03:07.713 CC app/trace_record/trace_record.o 00:03:07.713 CC test/rpc_client/rpc_client_test.o 00:03:07.713 CC app/spdk_nvme_identify/identify.o 00:03:07.713 CC app/spdk_lspci/spdk_lspci.o 00:03:07.713 TEST_HEADER include/spdk/accel.h 00:03:07.713 TEST_HEADER include/spdk/accel_module.h 00:03:07.713 TEST_HEADER include/spdk/assert.h 00:03:07.713 CC 
app/spdk_nvme_perf/perf.o 00:03:07.713 TEST_HEADER include/spdk/barrier.h 00:03:07.713 TEST_HEADER include/spdk/base64.h 00:03:07.713 CC app/spdk_top/spdk_top.o 00:03:07.713 TEST_HEADER include/spdk/bdev.h 00:03:07.714 TEST_HEADER include/spdk/bdev_module.h 00:03:07.714 TEST_HEADER include/spdk/bdev_zone.h 00:03:07.714 TEST_HEADER include/spdk/bit_array.h 00:03:07.714 TEST_HEADER include/spdk/bit_pool.h 00:03:07.714 CC app/spdk_nvme_discover/discovery_aer.o 00:03:07.714 TEST_HEADER include/spdk/blob_bdev.h 00:03:07.714 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:07.714 TEST_HEADER include/spdk/blobfs.h 00:03:07.714 TEST_HEADER include/spdk/blob.h 00:03:07.714 TEST_HEADER include/spdk/conf.h 00:03:07.714 TEST_HEADER include/spdk/config.h 00:03:07.714 TEST_HEADER include/spdk/cpuset.h 00:03:07.714 TEST_HEADER include/spdk/crc16.h 00:03:07.714 TEST_HEADER include/spdk/crc32.h 00:03:07.714 TEST_HEADER include/spdk/crc64.h 00:03:07.714 TEST_HEADER include/spdk/dma.h 00:03:07.714 TEST_HEADER include/spdk/dif.h 00:03:07.714 TEST_HEADER include/spdk/endian.h 00:03:07.714 TEST_HEADER include/spdk/env_dpdk.h 00:03:07.714 TEST_HEADER include/spdk/env.h 00:03:07.714 TEST_HEADER include/spdk/event.h 00:03:07.714 TEST_HEADER include/spdk/fd_group.h 00:03:07.714 TEST_HEADER include/spdk/fd.h 00:03:07.714 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:07.714 TEST_HEADER include/spdk/file.h 00:03:07.714 TEST_HEADER include/spdk/fsdev.h 00:03:07.714 TEST_HEADER include/spdk/fsdev_module.h 00:03:07.714 TEST_HEADER include/spdk/ftl.h 00:03:07.714 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:07.714 CC app/iscsi_tgt/iscsi_tgt.o 00:03:07.714 TEST_HEADER include/spdk/gpt_spec.h 00:03:07.714 TEST_HEADER include/spdk/histogram_data.h 00:03:07.714 TEST_HEADER include/spdk/hexlify.h 00:03:07.714 TEST_HEADER include/spdk/init.h 00:03:07.714 TEST_HEADER include/spdk/idxd_spec.h 00:03:07.714 TEST_HEADER include/spdk/idxd.h 00:03:07.714 TEST_HEADER include/spdk/ioat.h 00:03:07.714 CC 
app/nvmf_tgt/nvmf_main.o 00:03:07.714 TEST_HEADER include/spdk/ioat_spec.h 00:03:07.714 TEST_HEADER include/spdk/iscsi_spec.h 00:03:07.714 TEST_HEADER include/spdk/json.h 00:03:07.714 TEST_HEADER include/spdk/jsonrpc.h 00:03:07.714 TEST_HEADER include/spdk/keyring.h 00:03:07.714 CC app/spdk_dd/spdk_dd.o 00:03:07.714 TEST_HEADER include/spdk/keyring_module.h 00:03:07.714 TEST_HEADER include/spdk/likely.h 00:03:07.714 TEST_HEADER include/spdk/log.h 00:03:07.714 TEST_HEADER include/spdk/lvol.h 00:03:07.714 TEST_HEADER include/spdk/md5.h 00:03:07.714 TEST_HEADER include/spdk/memory.h 00:03:07.714 TEST_HEADER include/spdk/mmio.h 00:03:07.714 TEST_HEADER include/spdk/nbd.h 00:03:07.714 TEST_HEADER include/spdk/net.h 00:03:07.714 TEST_HEADER include/spdk/notify.h 00:03:07.714 TEST_HEADER include/spdk/nvme.h 00:03:07.714 TEST_HEADER include/spdk/nvme_intel.h 00:03:07.714 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:07.714 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:07.714 CC app/spdk_tgt/spdk_tgt.o 00:03:07.714 TEST_HEADER include/spdk/nvme_spec.h 00:03:07.714 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:07.714 TEST_HEADER include/spdk/nvme_zns.h 00:03:07.714 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:07.714 TEST_HEADER include/spdk/nvmf.h 00:03:07.714 TEST_HEADER include/spdk/nvmf_spec.h 00:03:07.714 TEST_HEADER include/spdk/nvmf_transport.h 00:03:07.714 TEST_HEADER include/spdk/opal_spec.h 00:03:07.714 TEST_HEADER include/spdk/opal.h 00:03:07.714 TEST_HEADER include/spdk/pci_ids.h 00:03:07.714 TEST_HEADER include/spdk/pipe.h 00:03:07.714 TEST_HEADER include/spdk/queue.h 00:03:07.714 TEST_HEADER include/spdk/reduce.h 00:03:07.714 TEST_HEADER include/spdk/rpc.h 00:03:07.714 TEST_HEADER include/spdk/scheduler.h 00:03:07.714 TEST_HEADER include/spdk/scsi.h 00:03:07.714 TEST_HEADER include/spdk/scsi_spec.h 00:03:07.714 TEST_HEADER include/spdk/stdinc.h 00:03:07.714 TEST_HEADER include/spdk/sock.h 00:03:07.714 TEST_HEADER include/spdk/string.h 00:03:07.714 TEST_HEADER 
include/spdk/thread.h 00:03:07.714 TEST_HEADER include/spdk/trace.h 00:03:07.714 TEST_HEADER include/spdk/trace_parser.h 00:03:07.714 TEST_HEADER include/spdk/tree.h 00:03:07.714 TEST_HEADER include/spdk/ublk.h 00:03:07.714 TEST_HEADER include/spdk/util.h 00:03:07.714 TEST_HEADER include/spdk/version.h 00:03:07.714 TEST_HEADER include/spdk/uuid.h 00:03:07.714 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:07.714 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:07.714 TEST_HEADER include/spdk/vhost.h 00:03:07.714 TEST_HEADER include/spdk/vmd.h 00:03:07.714 TEST_HEADER include/spdk/xor.h 00:03:07.714 CXX test/cpp_headers/accel.o 00:03:07.714 TEST_HEADER include/spdk/zipf.h 00:03:07.714 CXX test/cpp_headers/accel_module.o 00:03:07.714 CXX test/cpp_headers/assert.o 00:03:07.714 CXX test/cpp_headers/barrier.o 00:03:07.714 CXX test/cpp_headers/base64.o 00:03:07.714 CXX test/cpp_headers/bdev_module.o 00:03:07.714 CXX test/cpp_headers/bdev.o 00:03:07.714 CXX test/cpp_headers/bdev_zone.o 00:03:07.714 CXX test/cpp_headers/bit_array.o 00:03:07.714 CXX test/cpp_headers/bit_pool.o 00:03:07.714 CXX test/cpp_headers/blobfs_bdev.o 00:03:07.714 CXX test/cpp_headers/blob_bdev.o 00:03:07.714 CXX test/cpp_headers/blobfs.o 00:03:07.714 CXX test/cpp_headers/blob.o 00:03:07.714 CXX test/cpp_headers/conf.o 00:03:07.714 CXX test/cpp_headers/crc16.o 00:03:07.714 CXX test/cpp_headers/config.o 00:03:07.714 CXX test/cpp_headers/cpuset.o 00:03:07.714 CXX test/cpp_headers/crc32.o 00:03:07.714 CXX test/cpp_headers/crc64.o 00:03:07.714 CXX test/cpp_headers/dma.o 00:03:07.714 CXX test/cpp_headers/dif.o 00:03:07.714 CXX test/cpp_headers/endian.o 00:03:07.714 CXX test/cpp_headers/env_dpdk.o 00:03:07.714 CXX test/cpp_headers/event.o 00:03:07.714 CXX test/cpp_headers/env.o 00:03:07.714 CXX test/cpp_headers/fd.o 00:03:07.714 CXX test/cpp_headers/file.o 00:03:07.714 CXX test/cpp_headers/fd_group.o 00:03:07.714 CXX test/cpp_headers/fsdev.o 00:03:07.714 CXX test/cpp_headers/ftl.o 00:03:07.714 CXX 
test/cpp_headers/fsdev_module.o 00:03:07.714 CXX test/cpp_headers/hexlify.o 00:03:07.714 CXX test/cpp_headers/gpt_spec.o 00:03:07.714 CXX test/cpp_headers/fuse_dispatcher.o 00:03:07.714 CXX test/cpp_headers/histogram_data.o 00:03:07.714 CXX test/cpp_headers/idxd.o 00:03:07.714 CXX test/cpp_headers/idxd_spec.o 00:03:07.714 CXX test/cpp_headers/ioat.o 00:03:07.714 CXX test/cpp_headers/init.o 00:03:07.714 CXX test/cpp_headers/ioat_spec.o 00:03:07.714 CXX test/cpp_headers/iscsi_spec.o 00:03:07.714 CXX test/cpp_headers/json.o 00:03:07.714 CXX test/cpp_headers/jsonrpc.o 00:03:07.714 CXX test/cpp_headers/keyring_module.o 00:03:07.714 CXX test/cpp_headers/keyring.o 00:03:07.714 CXX test/cpp_headers/likely.o 00:03:07.714 CXX test/cpp_headers/lvol.o 00:03:07.714 CXX test/cpp_headers/log.o 00:03:07.714 CXX test/cpp_headers/md5.o 00:03:07.714 CXX test/cpp_headers/memory.o 00:03:07.975 CXX test/cpp_headers/mmio.o 00:03:07.975 CXX test/cpp_headers/net.o 00:03:07.975 CXX test/cpp_headers/nbd.o 00:03:07.975 CXX test/cpp_headers/nvme.o 00:03:07.975 CXX test/cpp_headers/notify.o 00:03:07.975 CXX test/cpp_headers/nvme_intel.o 00:03:07.975 CXX test/cpp_headers/nvme_spec.o 00:03:07.975 CXX test/cpp_headers/nvme_ocssd.o 00:03:07.975 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:07.975 CXX test/cpp_headers/nvmf_cmd.o 00:03:07.975 CXX test/cpp_headers/nvme_zns.o 00:03:07.975 CC examples/util/zipf/zipf.o 00:03:07.975 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:07.975 CXX test/cpp_headers/nvmf.o 00:03:07.975 CXX test/cpp_headers/nvmf_spec.o 00:03:07.975 CXX test/cpp_headers/nvmf_transport.o 00:03:07.975 CXX test/cpp_headers/opal.o 00:03:07.975 CXX test/cpp_headers/pci_ids.o 00:03:07.975 CXX test/cpp_headers/opal_spec.o 00:03:07.975 CC examples/ioat/verify/verify.o 00:03:07.975 CXX test/cpp_headers/reduce.o 00:03:07.975 CXX test/cpp_headers/pipe.o 00:03:07.975 CXX test/cpp_headers/rpc.o 00:03:07.975 CXX test/cpp_headers/queue.o 00:03:07.975 CXX test/cpp_headers/scheduler.o 00:03:07.975 CXX 
test/cpp_headers/scsi.o 00:03:07.975 CXX test/cpp_headers/scsi_spec.o 00:03:07.975 CXX test/cpp_headers/thread.o 00:03:07.975 LINK spdk_lspci 00:03:07.975 CXX test/cpp_headers/stdinc.o 00:03:07.975 CXX test/cpp_headers/sock.o 00:03:07.975 CC test/env/memory/memory_ut.o 00:03:07.975 CC examples/ioat/perf/perf.o 00:03:07.975 CXX test/cpp_headers/string.o 00:03:07.975 CXX test/cpp_headers/trace_parser.o 00:03:07.975 CXX test/cpp_headers/trace.o 00:03:07.975 CXX test/cpp_headers/tree.o 00:03:07.975 CC test/env/vtophys/vtophys.o 00:03:07.975 CXX test/cpp_headers/ublk.o 00:03:07.975 CXX test/cpp_headers/util.o 00:03:07.975 CXX test/cpp_headers/uuid.o 00:03:07.975 CXX test/cpp_headers/version.o 00:03:07.975 CXX test/cpp_headers/vfio_user_spec.o 00:03:07.975 CXX test/cpp_headers/vfio_user_pci.o 00:03:07.975 CXX test/cpp_headers/vhost.o 00:03:07.975 CC test/app/jsoncat/jsoncat.o 00:03:07.975 CXX test/cpp_headers/vmd.o 00:03:07.975 CXX test/cpp_headers/zipf.o 00:03:07.975 CXX test/cpp_headers/xor.o 00:03:07.975 CC test/app/histogram_perf/histogram_perf.o 00:03:07.975 CC test/env/pci/pci_ut.o 00:03:07.975 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:07.975 CC test/app/stub/stub.o 00:03:07.975 CC app/fio/nvme/fio_plugin.o 00:03:07.975 CC test/thread/poller_perf/poller_perf.o 00:03:07.975 CC test/dma/test_dma/test_dma.o 00:03:07.975 LINK rpc_client_test 00:03:07.975 CC test/app/bdev_svc/bdev_svc.o 00:03:08.242 CC app/fio/bdev/fio_plugin.o 00:03:08.242 LINK interrupt_tgt 00:03:08.242 LINK spdk_nvme_discover 00:03:08.242 LINK spdk_trace_record 00:03:08.242 LINK iscsi_tgt 00:03:08.504 LINK nvmf_tgt 00:03:08.504 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:08.504 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:08.504 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:08.504 LINK spdk_dd 00:03:08.504 LINK spdk_tgt 00:03:08.504 LINK vtophys 00:03:08.504 LINK jsoncat 00:03:08.504 CC test/env/mem_callbacks/mem_callbacks.o 00:03:08.504 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:08.764 LINK verify 00:03:08.765 LINK spdk_trace 00:03:09.024 LINK zipf 00:03:09.024 LINK histogram_perf 00:03:09.025 LINK poller_perf 00:03:09.025 LINK env_dpdk_post_init 00:03:09.025 LINK ioat_perf 00:03:09.025 LINK stub 00:03:09.025 LINK bdev_svc 00:03:09.286 LINK spdk_nvme_perf 00:03:09.286 LINK spdk_bdev 00:03:09.286 CC app/vhost/vhost.o 00:03:09.286 LINK pci_ut 00:03:09.286 LINK vhost_fuzz 00:03:09.286 LINK nvme_fuzz 00:03:09.286 LINK spdk_nvme 00:03:09.547 LINK test_dma 00:03:09.547 LINK mem_callbacks 00:03:09.547 LINK spdk_top 00:03:09.547 CC examples/sock/hello_world/hello_sock.o 00:03:09.547 CC examples/idxd/perf/perf.o 00:03:09.547 LINK spdk_nvme_identify 00:03:09.547 CC examples/vmd/lsvmd/lsvmd.o 00:03:09.547 CC examples/vmd/led/led.o 00:03:09.547 LINK vhost 00:03:09.547 CC test/event/reactor_perf/reactor_perf.o 00:03:09.547 CC test/event/reactor/reactor.o 00:03:09.547 CC test/event/event_perf/event_perf.o 00:03:09.547 CC examples/thread/thread/thread_ex.o 00:03:09.547 CC test/event/app_repeat/app_repeat.o 00:03:09.547 CC test/event/scheduler/scheduler.o 00:03:09.808 LINK led 00:03:09.808 LINK lsvmd 00:03:09.808 LINK reactor_perf 00:03:09.808 LINK reactor 00:03:09.808 LINK event_perf 00:03:09.808 LINK hello_sock 00:03:09.808 LINK app_repeat 00:03:09.808 LINK thread 00:03:09.808 LINK scheduler 00:03:09.808 LINK idxd_perf 00:03:10.071 LINK memory_ut 00:03:10.071 CC test/nvme/e2edp/nvme_dp.o 00:03:10.071 CC test/nvme/sgl/sgl.o 00:03:10.071 CC test/nvme/reset/reset.o 00:03:10.071 CC test/nvme/overhead/overhead.o 00:03:10.071 CC test/nvme/compliance/nvme_compliance.o 00:03:10.071 CC test/nvme/aer/aer.o 00:03:10.071 CC test/nvme/fused_ordering/fused_ordering.o 00:03:10.071 CC test/nvme/startup/startup.o 00:03:10.071 CC test/nvme/reserve/reserve.o 00:03:10.071 CC test/nvme/simple_copy/simple_copy.o 00:03:10.071 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:10.071 CC test/nvme/err_injection/err_injection.o 
00:03:10.071 CC test/nvme/cuse/cuse.o 00:03:10.071 CC test/nvme/boot_partition/boot_partition.o 00:03:10.071 CC test/nvme/fdp/fdp.o 00:03:10.071 CC test/nvme/connect_stress/connect_stress.o 00:03:10.071 CC test/accel/dif/dif.o 00:03:10.071 CC test/blobfs/mkfs/mkfs.o 00:03:10.071 CC test/lvol/esnap/esnap.o 00:03:10.334 LINK startup 00:03:10.334 LINK doorbell_aers 00:03:10.334 LINK boot_partition 00:03:10.334 LINK fused_ordering 00:03:10.334 LINK connect_stress 00:03:10.334 LINK sgl 00:03:10.334 LINK err_injection 00:03:10.334 LINK reserve 00:03:10.334 LINK simple_copy 00:03:10.334 LINK mkfs 00:03:10.334 LINK nvme_dp 00:03:10.334 CC examples/nvme/arbitration/arbitration.o 00:03:10.334 LINK reset 00:03:10.334 CC examples/nvme/hello_world/hello_world.o 00:03:10.334 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:10.334 CC examples/nvme/reconnect/reconnect.o 00:03:10.334 LINK iscsi_fuzz 00:03:10.334 LINK aer 00:03:10.334 LINK overhead 00:03:10.334 CC examples/nvme/abort/abort.o 00:03:10.334 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:10.334 CC examples/nvme/hotplug/hotplug.o 00:03:10.334 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:10.334 LINK nvme_compliance 00:03:10.334 LINK fdp 00:03:10.597 CC examples/accel/perf/accel_perf.o 00:03:10.597 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:10.597 CC examples/blob/hello_world/hello_blob.o 00:03:10.597 CC examples/blob/cli/blobcli.o 00:03:10.597 LINK pmr_persistence 00:03:10.597 LINK cmb_copy 00:03:10.597 LINK hello_world 00:03:10.597 LINK hotplug 00:03:10.597 LINK arbitration 00:03:10.597 LINK reconnect 00:03:10.858 LINK dif 00:03:10.858 LINK abort 00:03:10.858 LINK hello_fsdev 00:03:10.858 LINK hello_blob 00:03:10.858 LINK nvme_manage 00:03:11.120 LINK accel_perf 00:03:11.120 LINK blobcli 00:03:11.381 LINK cuse 00:03:11.381 CC test/bdev/bdevio/bdevio.o 00:03:11.642 CC examples/bdev/hello_world/hello_bdev.o 00:03:11.642 CC examples/bdev/bdevperf/bdevperf.o 00:03:11.642 LINK bdevio 00:03:11.903 LINK 
hello_bdev 00:03:12.475 LINK bdevperf 00:03:13.046 CC examples/nvmf/nvmf/nvmf.o 00:03:13.308 LINK nvmf 00:03:14.693 LINK esnap 00:03:14.954 00:03:14.954 real 0m54.734s 00:03:14.954 user 8m6.347s 00:03:14.954 sys 5m36.990s 00:03:14.954 18:02:16 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:14.954 18:02:16 make -- common/autotest_common.sh@10 -- $ set +x 00:03:14.954 ************************************ 00:03:14.954 END TEST make 00:03:14.954 ************************************ 00:03:15.215 18:02:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:15.215 18:02:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:15.215 18:02:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:15.215 18:02:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.215 18:02:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:15.215 18:02:16 -- pm/common@44 -- $ pid=1660754 00:03:15.215 18:02:16 -- pm/common@50 -- $ kill -TERM 1660754 00:03:15.215 18:02:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.215 18:02:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:15.215 18:02:16 -- pm/common@44 -- $ pid=1660755 00:03:15.215 18:02:16 -- pm/common@50 -- $ kill -TERM 1660755 00:03:15.215 18:02:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.215 18:02:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:15.215 18:02:16 -- pm/common@44 -- $ pid=1660757 00:03:15.215 18:02:16 -- pm/common@50 -- $ kill -TERM 1660757 00:03:15.215 18:02:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.215 18:02:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:15.215 18:02:16 -- 
pm/common@44 -- $ pid=1660780 00:03:15.215 18:02:16 -- pm/common@50 -- $ sudo -E kill -TERM 1660780 00:03:15.215 18:02:16 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:15.215 18:02:16 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:15.215 18:02:16 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:15.215 18:02:16 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:15.215 18:02:16 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:15.215 18:02:16 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:15.215 18:02:16 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:15.215 18:02:16 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:15.215 18:02:16 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:15.215 18:02:16 -- scripts/common.sh@336 -- # IFS=.-: 00:03:15.215 18:02:16 -- scripts/common.sh@336 -- # read -ra ver1 00:03:15.215 18:02:16 -- scripts/common.sh@337 -- # IFS=.-: 00:03:15.215 18:02:16 -- scripts/common.sh@337 -- # read -ra ver2 00:03:15.215 18:02:16 -- scripts/common.sh@338 -- # local 'op=<' 00:03:15.215 18:02:16 -- scripts/common.sh@340 -- # ver1_l=2 00:03:15.215 18:02:16 -- scripts/common.sh@341 -- # ver2_l=1 00:03:15.215 18:02:16 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:15.215 18:02:16 -- scripts/common.sh@344 -- # case "$op" in 00:03:15.215 18:02:16 -- scripts/common.sh@345 -- # : 1 00:03:15.215 18:02:16 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:15.215 18:02:16 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:15.215 18:02:16 -- scripts/common.sh@365 -- # decimal 1 00:03:15.215 18:02:16 -- scripts/common.sh@353 -- # local d=1 00:03:15.215 18:02:16 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:15.215 18:02:16 -- scripts/common.sh@355 -- # echo 1 00:03:15.215 18:02:16 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:15.215 18:02:16 -- scripts/common.sh@366 -- # decimal 2 00:03:15.215 18:02:16 -- scripts/common.sh@353 -- # local d=2 00:03:15.216 18:02:16 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:15.216 18:02:16 -- scripts/common.sh@355 -- # echo 2 00:03:15.216 18:02:16 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:15.216 18:02:16 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:15.216 18:02:16 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:15.216 18:02:16 -- scripts/common.sh@368 -- # return 0 00:03:15.216 18:02:16 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:15.216 18:02:16 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:15.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.216 --rc genhtml_branch_coverage=1 00:03:15.216 --rc genhtml_function_coverage=1 00:03:15.216 --rc genhtml_legend=1 00:03:15.216 --rc geninfo_all_blocks=1 00:03:15.216 --rc geninfo_unexecuted_blocks=1 00:03:15.216 00:03:15.216 ' 00:03:15.216 18:02:16 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:15.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.216 --rc genhtml_branch_coverage=1 00:03:15.216 --rc genhtml_function_coverage=1 00:03:15.216 --rc genhtml_legend=1 00:03:15.216 --rc geninfo_all_blocks=1 00:03:15.216 --rc geninfo_unexecuted_blocks=1 00:03:15.216 00:03:15.216 ' 00:03:15.216 18:02:16 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:15.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.216 --rc genhtml_branch_coverage=1 00:03:15.216 --rc 
genhtml_function_coverage=1 00:03:15.216 --rc genhtml_legend=1 00:03:15.216 --rc geninfo_all_blocks=1 00:03:15.216 --rc geninfo_unexecuted_blocks=1 00:03:15.216 00:03:15.216 ' 00:03:15.216 18:02:16 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:15.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.216 --rc genhtml_branch_coverage=1 00:03:15.216 --rc genhtml_function_coverage=1 00:03:15.216 --rc genhtml_legend=1 00:03:15.216 --rc geninfo_all_blocks=1 00:03:15.216 --rc geninfo_unexecuted_blocks=1 00:03:15.216 00:03:15.216 ' 00:03:15.216 18:02:16 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:15.216 18:02:16 -- nvmf/common.sh@7 -- # uname -s 00:03:15.216 18:02:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:15.216 18:02:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:15.216 18:02:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:15.216 18:02:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:15.216 18:02:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:15.216 18:02:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:15.216 18:02:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:15.216 18:02:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:15.216 18:02:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:15.216 18:02:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:15.216 18:02:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:15.216 18:02:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:15.216 18:02:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:15.216 18:02:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:15.216 18:02:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:15.216 18:02:16 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:15.216 18:02:16 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:15.216 18:02:16 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:15.477 18:02:16 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:15.477 18:02:16 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:15.477 18:02:16 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:15.477 18:02:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.477 18:02:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.477 18:02:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.477 18:02:16 -- paths/export.sh@5 -- # export PATH 00:03:15.477 18:02:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.477 18:02:16 -- nvmf/common.sh@51 -- # : 0 00:03:15.477 18:02:16 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:15.477 18:02:16 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:15.477 18:02:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:15.477 18:02:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:15.477 18:02:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:15.477 18:02:16 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:15.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:15.477 18:02:16 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:15.477 18:02:16 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:15.477 18:02:16 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:15.477 18:02:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:15.477 18:02:16 -- spdk/autotest.sh@32 -- # uname -s 00:03:15.477 18:02:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:15.477 18:02:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:15.477 18:02:16 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:15.477 18:02:16 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:15.477 18:02:16 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:15.477 18:02:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:15.477 18:02:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:15.477 18:02:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:15.477 18:02:16 -- spdk/autotest.sh@48 -- # udevadm_pid=1725948 00:03:15.477 18:02:16 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:15.477 18:02:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:15.477 18:02:16 -- pm/common@17 -- # local monitor 00:03:15.477 18:02:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.477 18:02:16 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:15.477 18:02:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.477 18:02:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.477 18:02:16 -- pm/common@21 -- # date +%s 00:03:15.477 18:02:16 -- pm/common@21 -- # date +%s 00:03:15.477 18:02:16 -- pm/common@25 -- # sleep 1 00:03:15.477 18:02:16 -- pm/common@21 -- # date +%s 00:03:15.477 18:02:16 -- pm/common@21 -- # date +%s 00:03:15.477 18:02:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732035736 00:03:15.477 18:02:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732035736 00:03:15.477 18:02:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732035736 00:03:15.477 18:02:16 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732035736 00:03:15.477 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732035736_collect-cpu-load.pm.log 00:03:15.477 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732035736_collect-vmstat.pm.log 00:03:15.477 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732035736_collect-cpu-temp.pm.log 00:03:15.477 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732035736_collect-bmc-pm.bmc.pm.log 00:03:16.418 
18:02:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:16.418 18:02:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:16.418 18:02:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:16.418 18:02:17 -- common/autotest_common.sh@10 -- # set +x 00:03:16.418 18:02:17 -- spdk/autotest.sh@59 -- # create_test_list 00:03:16.418 18:02:17 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:16.418 18:02:17 -- common/autotest_common.sh@10 -- # set +x 00:03:16.418 18:02:17 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:16.419 18:02:17 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:16.419 18:02:17 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:16.419 18:02:17 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:16.419 18:02:17 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:16.419 18:02:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:16.419 18:02:17 -- common/autotest_common.sh@1457 -- # uname 00:03:16.419 18:02:17 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:16.419 18:02:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:16.419 18:02:17 -- common/autotest_common.sh@1477 -- # uname 00:03:16.419 18:02:17 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:16.419 18:02:17 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:16.419 18:02:17 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:16.419 lcov: LCOV version 1.15 00:03:16.419 18:02:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:31.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:31.332 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:49.509 18:02:48 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:49.510 18:02:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.510 18:02:48 -- common/autotest_common.sh@10 -- # set +x 00:03:49.510 18:02:48 -- spdk/autotest.sh@78 -- # rm -f 00:03:49.510 18:02:48 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.455 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:50.455 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:50.455 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:50.455 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:50.455 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:50.455 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:50.716 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:50.716 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:50.716 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:50.716 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:50.716 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:50.716 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:50.716 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:50.716 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:50.716 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:50.716 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:50.716 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:51.289 18:02:52 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:51.289 18:02:52 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:51.289 18:02:52 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:51.289 18:02:52 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:51.289 18:02:52 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:51.289 18:02:52 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:51.289 18:02:52 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:51.289 18:02:52 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.289 18:02:52 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:51.289 18:02:52 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:51.289 18:02:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.289 18:02:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:51.289 18:02:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:51.289 18:02:52 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:51.289 18:02:52 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:51.289 No valid GPT data, bailing 00:03:51.289 18:02:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:51.289 18:02:52 -- scripts/common.sh@394 -- # pt= 00:03:51.289 18:02:52 -- scripts/common.sh@395 -- # return 1 00:03:51.289 18:02:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:51.289 1+0 records in 00:03:51.289 1+0 records out 00:03:51.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.005451 s, 192 MB/s 00:03:51.289 18:02:52 -- spdk/autotest.sh@105 -- # sync 00:03:51.289 18:02:52 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:51.289 18:02:52 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:51.289 18:02:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:01.293 18:03:01 -- spdk/autotest.sh@111 -- # uname -s 00:04:01.293 18:03:01 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:01.293 18:03:01 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:01.293 18:03:01 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:03.207 Hugepages 00:04:03.207 node hugesize free / total 00:04:03.207 node0 1048576kB 0 / 0 00:04:03.207 node0 2048kB 0 / 0 00:04:03.207 node1 1048576kB 0 / 0 00:04:03.207 node1 2048kB 0 / 0 00:04:03.207 00:04:03.207 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:03.207 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:03.207 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:03.207 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:03.207 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:03.207 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:03.207 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:03.207 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:03.207 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:03.469 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:03.469 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:03.469 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:03.469 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:03.469 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:03.469 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:03.469 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:03.469 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:03.469 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:03.469 18:03:04 -- spdk/autotest.sh@117 -- # uname -s 00:04:03.469 18:03:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:03.469 18:03:04 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:04:03.469 18:03:04 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.774 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:06.774 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:07.036 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:07.036 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:07.036 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:07.036 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:07.036 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:07.036 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:07.036 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:07.036 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:07.036 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:07.036 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:07.036 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:07.036 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:07.036 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:07.036 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:08.950 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:09.211 18:03:10 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:10.154 18:03:11 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:10.154 18:03:11 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:10.154 18:03:11 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:10.154 18:03:11 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:10.154 18:03:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:10.154 18:03:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:10.154 18:03:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:10.154 18:03:11 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:10.154 18:03:11 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:04:10.154 18:03:11 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:10.154 18:03:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:10.154 18:03:11 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.362 Waiting for block devices as requested 00:04:14.362 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:14.362 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:14.362 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:14.362 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:14.362 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:14.362 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:14.362 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:14.362 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:14.362 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:14.624 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:14.624 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:14.624 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:14.886 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:14.886 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:14.886 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:15.147 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:15.147 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:15.408 18:03:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:15.408 18:03:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:15.408 18:03:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:15.408 18:03:16 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:04:15.408 18:03:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:15.408 18:03:16 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:15.408 18:03:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:15.408 18:03:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:15.408 18:03:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:15.408 18:03:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:15.408 18:03:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:15.408 18:03:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:15.409 18:03:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:15.409 18:03:16 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:15.409 18:03:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:15.409 18:03:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:15.409 18:03:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:15.409 18:03:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:15.409 18:03:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:15.409 18:03:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:15.409 18:03:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:15.409 18:03:16 -- common/autotest_common.sh@1543 -- # continue 00:04:15.409 18:03:16 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:15.409 18:03:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:15.409 18:03:16 -- common/autotest_common.sh@10 -- # set +x 00:04:15.409 18:03:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:15.409 18:03:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.409 18:03:16 -- common/autotest_common.sh@10 -- # set +x 00:04:15.670 18:03:16 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:18.978 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:18.978 0000:80:01.7 (8086 0b00): 
ioatdma -> vfio-pci 00:04:18.978 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:18.978 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:18.978 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:18.978 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:18.978 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:18.978 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:18.978 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:19.239 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:19.239 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:19.239 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:19.239 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:19.239 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:19.239 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:19.239 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:19.239 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:19.500 18:03:20 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:19.500 18:03:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:19.500 18:03:20 -- common/autotest_common.sh@10 -- # set +x 00:04:19.500 18:03:20 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:19.500 18:03:20 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:19.500 18:03:20 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:19.500 18:03:20 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:19.500 18:03:20 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:19.500 18:03:20 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:19.500 18:03:20 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:19.500 18:03:20 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:19.500 18:03:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:19.501 18:03:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:19.501 18:03:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:04:19.501 18:03:20 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:19.501 18:03:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:19.762 18:03:21 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:19.762 18:03:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:19.762 18:03:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:19.762 18:03:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:19.762 18:03:21 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:19.762 18:03:21 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:19.762 18:03:21 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:19.762 18:03:21 -- common/autotest_common.sh@1572 -- # return 0 00:04:19.762 18:03:21 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:19.762 18:03:21 -- common/autotest_common.sh@1580 -- # return 0 00:04:19.762 18:03:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:19.762 18:03:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:19.762 18:03:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:19.762 18:03:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:19.762 18:03:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:19.762 18:03:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.762 18:03:21 -- common/autotest_common.sh@10 -- # set +x 00:04:19.762 18:03:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:19.762 18:03:21 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:19.762 18:03:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.762 18:03:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.762 18:03:21 -- common/autotest_common.sh@10 -- # set +x 00:04:19.762 ************************************ 
00:04:19.762 START TEST env 00:04:19.762 ************************************ 00:04:19.762 18:03:21 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:19.762 * Looking for test storage... 00:04:19.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:19.762 18:03:21 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:19.762 18:03:21 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.024 18:03:21 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.024 18:03:21 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.024 18:03:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.024 18:03:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.024 18:03:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.024 18:03:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.024 18:03:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.024 18:03:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.024 18:03:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.024 18:03:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.024 18:03:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.024 18:03:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.024 18:03:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.024 18:03:21 env -- scripts/common.sh@344 -- # case "$op" in 00:04:20.024 18:03:21 env -- scripts/common.sh@345 -- # : 1 00:04:20.024 18:03:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.024 18:03:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.024 18:03:21 env -- scripts/common.sh@365 -- # decimal 1 00:04:20.024 18:03:21 env -- scripts/common.sh@353 -- # local d=1 00:04:20.024 18:03:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.024 18:03:21 env -- scripts/common.sh@355 -- # echo 1 00:04:20.024 18:03:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.024 18:03:21 env -- scripts/common.sh@366 -- # decimal 2 00:04:20.024 18:03:21 env -- scripts/common.sh@353 -- # local d=2 00:04:20.024 18:03:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.024 18:03:21 env -- scripts/common.sh@355 -- # echo 2 00:04:20.024 18:03:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.024 18:03:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.024 18:03:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.024 18:03:21 env -- scripts/common.sh@368 -- # return 0 00:04:20.024 18:03:21 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.024 18:03:21 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.024 --rc genhtml_branch_coverage=1 00:04:20.024 --rc genhtml_function_coverage=1 00:04:20.024 --rc genhtml_legend=1 00:04:20.024 --rc geninfo_all_blocks=1 00:04:20.024 --rc geninfo_unexecuted_blocks=1 00:04:20.024 00:04:20.024 ' 00:04:20.024 18:03:21 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.024 --rc genhtml_branch_coverage=1 00:04:20.024 --rc genhtml_function_coverage=1 00:04:20.024 --rc genhtml_legend=1 00:04:20.024 --rc geninfo_all_blocks=1 00:04:20.024 --rc geninfo_unexecuted_blocks=1 00:04:20.024 00:04:20.024 ' 00:04:20.024 18:03:21 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:20.024 --rc genhtml_branch_coverage=1 00:04:20.024 --rc genhtml_function_coverage=1 00:04:20.024 --rc genhtml_legend=1 00:04:20.024 --rc geninfo_all_blocks=1 00:04:20.024 --rc geninfo_unexecuted_blocks=1 00:04:20.024 00:04:20.024 ' 00:04:20.024 18:03:21 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.024 --rc genhtml_branch_coverage=1 00:04:20.024 --rc genhtml_function_coverage=1 00:04:20.024 --rc genhtml_legend=1 00:04:20.024 --rc geninfo_all_blocks=1 00:04:20.024 --rc geninfo_unexecuted_blocks=1 00:04:20.025 00:04:20.025 ' 00:04:20.025 18:03:21 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:20.025 18:03:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.025 18:03:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.025 18:03:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.025 ************************************ 00:04:20.025 START TEST env_memory 00:04:20.025 ************************************ 00:04:20.025 18:03:21 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:20.025 00:04:20.025 00:04:20.025 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.025 http://cunit.sourceforge.net/ 00:04:20.025 00:04:20.025 00:04:20.025 Suite: memory 00:04:20.025 Test: alloc and free memory map ...[2024-11-19 18:03:21.420793] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:20.025 passed 00:04:20.025 Test: mem map translation ...[2024-11-19 18:03:21.446509] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:20.025 [2024-11-19 
18:03:21.446538] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:20.025 [2024-11-19 18:03:21.446584] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:20.025 [2024-11-19 18:03:21.446592] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:20.025 passed 00:04:20.287 Test: mem map registration ...[2024-11-19 18:03:21.501761] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:20.287 [2024-11-19 18:03:21.501784] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:20.287 passed 00:04:20.287 Test: mem map adjacent registrations ...passed 00:04:20.287 00:04:20.287 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.287 suites 1 1 n/a 0 0 00:04:20.287 tests 4 4 4 0 0 00:04:20.287 asserts 152 152 152 0 n/a 00:04:20.287 00:04:20.287 Elapsed time = 0.194 seconds 00:04:20.287 00:04:20.287 real 0m0.209s 00:04:20.287 user 0m0.200s 00:04:20.287 sys 0m0.008s 00:04:20.287 18:03:21 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.287 18:03:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:20.287 ************************************ 00:04:20.287 END TEST env_memory 00:04:20.287 ************************************ 00:04:20.287 18:03:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:20.287 18:03:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:20.287 18:03:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.287 18:03:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.287 ************************************ 00:04:20.287 START TEST env_vtophys 00:04:20.287 ************************************ 00:04:20.287 18:03:21 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:20.287 EAL: lib.eal log level changed from notice to debug 00:04:20.287 EAL: Detected lcore 0 as core 0 on socket 0 00:04:20.287 EAL: Detected lcore 1 as core 1 on socket 0 00:04:20.287 EAL: Detected lcore 2 as core 2 on socket 0 00:04:20.287 EAL: Detected lcore 3 as core 3 on socket 0 00:04:20.287 EAL: Detected lcore 4 as core 4 on socket 0 00:04:20.287 EAL: Detected lcore 5 as core 5 on socket 0 00:04:20.287 EAL: Detected lcore 6 as core 6 on socket 0 00:04:20.287 EAL: Detected lcore 7 as core 7 on socket 0 00:04:20.287 EAL: Detected lcore 8 as core 8 on socket 0 00:04:20.287 EAL: Detected lcore 9 as core 9 on socket 0 00:04:20.287 EAL: Detected lcore 10 as core 10 on socket 0 00:04:20.287 EAL: Detected lcore 11 as core 11 on socket 0 00:04:20.287 EAL: Detected lcore 12 as core 12 on socket 0 00:04:20.287 EAL: Detected lcore 13 as core 13 on socket 0 00:04:20.287 EAL: Detected lcore 14 as core 14 on socket 0 00:04:20.287 EAL: Detected lcore 15 as core 15 on socket 0 00:04:20.287 EAL: Detected lcore 16 as core 16 on socket 0 00:04:20.287 EAL: Detected lcore 17 as core 17 on socket 0 00:04:20.287 EAL: Detected lcore 18 as core 18 on socket 0 00:04:20.287 EAL: Detected lcore 19 as core 19 on socket 0 00:04:20.287 EAL: Detected lcore 20 as core 20 on socket 0 00:04:20.287 EAL: Detected lcore 21 as core 21 on socket 0 00:04:20.287 EAL: Detected lcore 22 as core 22 on socket 0 00:04:20.287 EAL: Detected lcore 23 as core 23 on socket 0 00:04:20.287 EAL: Detected lcore 24 as core 24 on socket 0 00:04:20.287 EAL: Detected lcore 25 
as core 25 on socket 0 00:04:20.287 EAL: Detected lcore 26 as core 26 on socket 0 00:04:20.287 EAL: Detected lcore 27 as core 27 on socket 0 00:04:20.287 EAL: Detected lcore 28 as core 28 on socket 0 00:04:20.287 EAL: Detected lcore 29 as core 29 on socket 0 00:04:20.287 EAL: Detected lcore 30 as core 30 on socket 0 00:04:20.287 EAL: Detected lcore 31 as core 31 on socket 0 00:04:20.287 EAL: Detected lcore 32 as core 32 on socket 0 00:04:20.287 EAL: Detected lcore 33 as core 33 on socket 0 00:04:20.287 EAL: Detected lcore 34 as core 34 on socket 0 00:04:20.287 EAL: Detected lcore 35 as core 35 on socket 0 00:04:20.287 EAL: Detected lcore 36 as core 0 on socket 1 00:04:20.287 EAL: Detected lcore 37 as core 1 on socket 1 00:04:20.287 EAL: Detected lcore 38 as core 2 on socket 1 00:04:20.287 EAL: Detected lcore 39 as core 3 on socket 1 00:04:20.287 EAL: Detected lcore 40 as core 4 on socket 1 00:04:20.287 EAL: Detected lcore 41 as core 5 on socket 1 00:04:20.287 EAL: Detected lcore 42 as core 6 on socket 1 00:04:20.287 EAL: Detected lcore 43 as core 7 on socket 1 00:04:20.287 EAL: Detected lcore 44 as core 8 on socket 1 00:04:20.287 EAL: Detected lcore 45 as core 9 on socket 1 00:04:20.287 EAL: Detected lcore 46 as core 10 on socket 1 00:04:20.287 EAL: Detected lcore 47 as core 11 on socket 1 00:04:20.287 EAL: Detected lcore 48 as core 12 on socket 1 00:04:20.287 EAL: Detected lcore 49 as core 13 on socket 1 00:04:20.287 EAL: Detected lcore 50 as core 14 on socket 1 00:04:20.287 EAL: Detected lcore 51 as core 15 on socket 1 00:04:20.287 EAL: Detected lcore 52 as core 16 on socket 1 00:04:20.287 EAL: Detected lcore 53 as core 17 on socket 1 00:04:20.287 EAL: Detected lcore 54 as core 18 on socket 1 00:04:20.287 EAL: Detected lcore 55 as core 19 on socket 1 00:04:20.287 EAL: Detected lcore 56 as core 20 on socket 1 00:04:20.287 EAL: Detected lcore 57 as core 21 on socket 1 00:04:20.287 EAL: Detected lcore 58 as core 22 on socket 1 00:04:20.287 EAL: Detected lcore 59 as 
core 23 on socket 1 00:04:20.287 EAL: Detected lcore 60 as core 24 on socket 1 00:04:20.287 EAL: Detected lcore 61 as core 25 on socket 1 00:04:20.287 EAL: Detected lcore 62 as core 26 on socket 1 00:04:20.287 EAL: Detected lcore 63 as core 27 on socket 1 00:04:20.287 EAL: Detected lcore 64 as core 28 on socket 1 00:04:20.287 EAL: Detected lcore 65 as core 29 on socket 1 00:04:20.287 EAL: Detected lcore 66 as core 30 on socket 1 00:04:20.287 EAL: Detected lcore 67 as core 31 on socket 1 00:04:20.287 EAL: Detected lcore 68 as core 32 on socket 1 00:04:20.287 EAL: Detected lcore 69 as core 33 on socket 1 00:04:20.287 EAL: Detected lcore 70 as core 34 on socket 1 00:04:20.287 EAL: Detected lcore 71 as core 35 on socket 1 00:04:20.287 EAL: Detected lcore 72 as core 0 on socket 0 00:04:20.287 EAL: Detected lcore 73 as core 1 on socket 0 00:04:20.287 EAL: Detected lcore 74 as core 2 on socket 0 00:04:20.287 EAL: Detected lcore 75 as core 3 on socket 0 00:04:20.287 EAL: Detected lcore 76 as core 4 on socket 0 00:04:20.287 EAL: Detected lcore 77 as core 5 on socket 0 00:04:20.287 EAL: Detected lcore 78 as core 6 on socket 0 00:04:20.287 EAL: Detected lcore 79 as core 7 on socket 0 00:04:20.287 EAL: Detected lcore 80 as core 8 on socket 0 00:04:20.287 EAL: Detected lcore 81 as core 9 on socket 0 00:04:20.287 EAL: Detected lcore 82 as core 10 on socket 0 00:04:20.287 EAL: Detected lcore 83 as core 11 on socket 0 00:04:20.287 EAL: Detected lcore 84 as core 12 on socket 0 00:04:20.287 EAL: Detected lcore 85 as core 13 on socket 0 00:04:20.287 EAL: Detected lcore 86 as core 14 on socket 0 00:04:20.287 EAL: Detected lcore 87 as core 15 on socket 0 00:04:20.287 EAL: Detected lcore 88 as core 16 on socket 0 00:04:20.287 EAL: Detected lcore 89 as core 17 on socket 0 00:04:20.287 EAL: Detected lcore 90 as core 18 on socket 0 00:04:20.287 EAL: Detected lcore 91 as core 19 on socket 0 00:04:20.287 EAL: Detected lcore 92 as core 20 on socket 0 00:04:20.287 EAL: Detected lcore 93 as 
core 21 on socket 0 00:04:20.287 EAL: Detected lcore 94 as core 22 on socket 0 00:04:20.287 EAL: Detected lcore 95 as core 23 on socket 0 00:04:20.287 EAL: Detected lcore 96 as core 24 on socket 0 00:04:20.287 EAL: Detected lcore 97 as core 25 on socket 0 00:04:20.287 EAL: Detected lcore 98 as core 26 on socket 0 00:04:20.287 EAL: Detected lcore 99 as core 27 on socket 0 00:04:20.287 EAL: Detected lcore 100 as core 28 on socket 0 00:04:20.287 EAL: Detected lcore 101 as core 29 on socket 0 00:04:20.287 EAL: Detected lcore 102 as core 30 on socket 0 00:04:20.287 EAL: Detected lcore 103 as core 31 on socket 0 00:04:20.287 EAL: Detected lcore 104 as core 32 on socket 0 00:04:20.287 EAL: Detected lcore 105 as core 33 on socket 0 00:04:20.287 EAL: Detected lcore 106 as core 34 on socket 0 00:04:20.287 EAL: Detected lcore 107 as core 35 on socket 0 00:04:20.287 EAL: Detected lcore 108 as core 0 on socket 1 00:04:20.287 EAL: Detected lcore 109 as core 1 on socket 1 00:04:20.287 EAL: Detected lcore 110 as core 2 on socket 1 00:04:20.287 EAL: Detected lcore 111 as core 3 on socket 1 00:04:20.287 EAL: Detected lcore 112 as core 4 on socket 1 00:04:20.287 EAL: Detected lcore 113 as core 5 on socket 1 00:04:20.287 EAL: Detected lcore 114 as core 6 on socket 1 00:04:20.287 EAL: Detected lcore 115 as core 7 on socket 1 00:04:20.287 EAL: Detected lcore 116 as core 8 on socket 1 00:04:20.287 EAL: Detected lcore 117 as core 9 on socket 1 00:04:20.287 EAL: Detected lcore 118 as core 10 on socket 1 00:04:20.287 EAL: Detected lcore 119 as core 11 on socket 1 00:04:20.287 EAL: Detected lcore 120 as core 12 on socket 1 00:04:20.287 EAL: Detected lcore 121 as core 13 on socket 1 00:04:20.287 EAL: Detected lcore 122 as core 14 on socket 1 00:04:20.287 EAL: Detected lcore 123 as core 15 on socket 1 00:04:20.287 EAL: Detected lcore 124 as core 16 on socket 1 00:04:20.287 EAL: Detected lcore 125 as core 17 on socket 1 00:04:20.288 EAL: Detected lcore 126 as core 18 on socket 1 00:04:20.288 
EAL: Detected lcore 127 as core 19 on socket 1 00:04:20.288 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:20.288 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:20.288 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:20.288 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:20.288 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:20.288 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:20.288 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:20.288 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:20.288 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:20.288 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:20.288 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:20.288 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:20.288 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:20.288 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:20.288 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:20.288 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:20.288 EAL: Maximum logical cores by configuration: 128 00:04:20.288 EAL: Detected CPU lcores: 128 00:04:20.288 EAL: Detected NUMA nodes: 2 00:04:20.288 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:20.288 EAL: Detected shared linkage of DPDK 00:04:20.288 EAL: No shared files mode enabled, IPC will be disabled 00:04:20.288 EAL: Bus pci wants IOVA as 'DC' 00:04:20.288 EAL: Buses did not request a specific IOVA mode. 00:04:20.288 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:20.288 EAL: Selected IOVA mode 'VA' 00:04:20.288 EAL: Probing VFIO support... 00:04:20.288 EAL: IOMMU type 1 (Type 1) is supported 00:04:20.288 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:20.288 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:20.288 EAL: VFIO support initialized 00:04:20.288 EAL: Ask a virtual area of 0x2e000 bytes 00:04:20.288 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:20.288 EAL: Setting up physically contiguous memory... 
00:04:20.288 EAL: Setting maximum number of open files to 524288
00:04:20.288 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:20.288 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:20.288 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:20.288 EAL: Ask a virtual area of 0x61000 bytes
00:04:20.288 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:20.288 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:20.288 EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.288 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:20.288 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:20.288 EAL: Ask a virtual area of 0x61000 bytes
00:04:20.288 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:20.288 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:20.288 EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.288 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:20.288 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:20.288 EAL: Ask a virtual area of 0x61000 bytes
00:04:20.288 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:20.288 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:20.288 EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.288 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:20.288 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:20.288 EAL: Ask a virtual area of 0x61000 bytes
00:04:20.288 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:20.288 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:20.288 EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.288 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:20.288 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:20.288 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:20.288 EAL: Ask a virtual area of 0x61000 bytes
00:04:20.288 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:20.288 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:20.288 EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.288 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:20.288 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:20.288 EAL: Ask a virtual area of 0x61000 bytes
00:04:20.288 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:20.288 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:20.288 EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.288 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:20.288 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:20.288 EAL: Ask a virtual area of 0x61000 bytes
00:04:20.288 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:20.288 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:20.288 EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.288 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:20.288 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:20.288 EAL: Ask a virtual area of 0x61000 bytes
00:04:20.288 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:20.288 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:20.288 EAL: Ask a virtual area of 0x400000000 bytes
00:04:20.288 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:20.288 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:20.288 EAL: Hugepages will be freed exactly as allocated.
00:04:20.288 EAL: No shared files mode enabled, IPC is disabled
00:04:20.288 EAL: No shared files mode enabled, IPC is disabled
00:04:20.288 EAL: TSC frequency is ~2400000 KHz
00:04:20.288 EAL: Main lcore 0 is ready (tid=7f9ad82f8a00;cpuset=[0])
00:04:20.288 EAL: Trying to obtain current memory policy.
00:04:20.288 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.288 EAL: Restoring previous memory policy: 0
00:04:20.288 EAL: request: mp_malloc_sync
00:04:20.288 EAL: No shared files mode enabled, IPC is disabled
00:04:20.288 EAL: Heap on socket 0 was expanded by 2MB
00:04:20.288 EAL: No shared files mode enabled, IPC is disabled
00:04:20.549 EAL: No PCI address specified using 'addr=' in: bus=pci
00:04:20.549 EAL: Mem event callback 'spdk:(nil)' registered
00:04:20.549
00:04:20.549
00:04:20.549 CUnit - A unit testing framework for C - Version 2.1-3
00:04:20.549 http://cunit.sourceforge.net/
00:04:20.549
00:04:20.549
00:04:20.549 Suite: components_suite
00:04:20.549 Test: vtophys_malloc_test ...passed
00:04:20.549 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:20.549 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.549 EAL: Restoring previous memory policy: 4
00:04:20.549 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.549 EAL: request: mp_malloc_sync
00:04:20.549 EAL: No shared files mode enabled, IPC is disabled
00:04:20.549 EAL: Heap on socket 0 was expanded by 4MB
00:04:20.549 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.549 EAL: request: mp_malloc_sync
00:04:20.549 EAL: No shared files mode enabled, IPC is disabled
00:04:20.549 EAL: Heap on socket 0 was shrunk by 4MB
00:04:20.549 EAL: Trying to obtain current memory policy.
00:04:20.549 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.549 EAL: Restoring previous memory policy: 4
00:04:20.549 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.549 EAL: request: mp_malloc_sync
00:04:20.549 EAL: No shared files mode enabled, IPC is disabled
00:04:20.549 EAL: Heap on socket 0 was expanded by 6MB
00:04:20.549 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.549 EAL: request: mp_malloc_sync
00:04:20.549 EAL: No shared files mode enabled, IPC is disabled
00:04:20.549 EAL: Heap on socket 0 was shrunk by 6MB
00:04:20.549 EAL: Trying to obtain current memory policy.
00:04:20.549 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.549 EAL: Restoring previous memory policy: 4
00:04:20.549 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.549 EAL: request: mp_malloc_sync
00:04:20.549 EAL: No shared files mode enabled, IPC is disabled
00:04:20.549 EAL: Heap on socket 0 was expanded by 10MB
00:04:20.549 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.549 EAL: request: mp_malloc_sync
00:04:20.549 EAL: No shared files mode enabled, IPC is disabled
00:04:20.549 EAL: Heap on socket 0 was shrunk by 10MB
00:04:20.549 EAL: Trying to obtain current memory policy.
00:04:20.549 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.549 EAL: Restoring previous memory policy: 4
00:04:20.549 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.549 EAL: request: mp_malloc_sync
00:04:20.549 EAL: No shared files mode enabled, IPC is disabled
00:04:20.549 EAL: Heap on socket 0 was expanded by 18MB
00:04:20.549 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.549 EAL: request: mp_malloc_sync
00:04:20.549 EAL: No shared files mode enabled, IPC is disabled
00:04:20.549 EAL: Heap on socket 0 was shrunk by 18MB
00:04:20.549 EAL: Trying to obtain current memory policy.
00:04:20.549 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.549 EAL: Restoring previous memory policy: 4
00:04:20.549 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.549 EAL: request: mp_malloc_sync
00:04:20.549 EAL: No shared files mode enabled, IPC is disabled
00:04:20.549 EAL: Heap on socket 0 was expanded by 34MB
00:04:20.549 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.549 EAL: request: mp_malloc_sync
00:04:20.549 EAL: No shared files mode enabled, IPC is disabled
00:04:20.549 EAL: Heap on socket 0 was shrunk by 34MB
00:04:20.549 EAL: Trying to obtain current memory policy.
00:04:20.549 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.549 EAL: Restoring previous memory policy: 4
00:04:20.549 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.549 EAL: request: mp_malloc_sync
00:04:20.549 EAL: No shared files mode enabled, IPC is disabled
00:04:20.549 EAL: Heap on socket 0 was expanded by 66MB
00:04:20.549 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.549 EAL: request: mp_malloc_sync
00:04:20.549 EAL: No shared files mode enabled, IPC is disabled
00:04:20.549 EAL: Heap on socket 0 was shrunk by 66MB
00:04:20.549 EAL: Trying to obtain current memory policy.
00:04:20.549 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.549 EAL: Restoring previous memory policy: 4
00:04:20.549 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.549 EAL: request: mp_malloc_sync
00:04:20.549 EAL: No shared files mode enabled, IPC is disabled
00:04:20.549 EAL: Heap on socket 0 was expanded by 130MB
00:04:20.549 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.549 EAL: request: mp_malloc_sync
00:04:20.549 EAL: No shared files mode enabled, IPC is disabled
00:04:20.549 EAL: Heap on socket 0 was shrunk by 130MB
00:04:20.549 EAL: Trying to obtain current memory policy.
00:04:20.549 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.549 EAL: Restoring previous memory policy: 4
00:04:20.549 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.549 EAL: request: mp_malloc_sync
00:04:20.549 EAL: No shared files mode enabled, IPC is disabled
00:04:20.549 EAL: Heap on socket 0 was expanded by 258MB
00:04:20.549 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.549 EAL: request: mp_malloc_sync
00:04:20.549 EAL: No shared files mode enabled, IPC is disabled
00:04:20.549 EAL: Heap on socket 0 was shrunk by 258MB
00:04:20.549 EAL: Trying to obtain current memory policy.
00:04:20.549 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:20.810 EAL: Restoring previous memory policy: 4
00:04:20.810 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.810 EAL: request: mp_malloc_sync
00:04:20.810 EAL: No shared files mode enabled, IPC is disabled
00:04:20.810 EAL: Heap on socket 0 was expanded by 514MB
00:04:20.810 EAL: Calling mem event callback 'spdk:(nil)'
00:04:20.810 EAL: request: mp_malloc_sync
00:04:20.810 EAL: No shared files mode enabled, IPC is disabled
00:04:20.810 EAL: Heap on socket 0 was shrunk by 514MB
00:04:20.810 EAL: Trying to obtain current memory policy.
00:04:20.810 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:21.070 EAL: Restoring previous memory policy: 4
00:04:21.070 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.070 EAL: request: mp_malloc_sync
00:04:21.070 EAL: No shared files mode enabled, IPC is disabled
00:04:21.070 EAL: Heap on socket 0 was expanded by 1026MB
00:04:21.070 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.070 EAL: request: mp_malloc_sync
00:04:21.070 EAL: No shared files mode enabled, IPC is disabled
00:04:21.070 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:21.070 passed
00:04:21.070
00:04:21.070 Run Summary: Type Total Ran Passed Failed Inactive
00:04:21.070 suites 1 1 n/a 0 0
00:04:21.070 tests 2 2 2 0 0
00:04:21.070 asserts 497 497 497 0 n/a
00:04:21.070
00:04:21.070 Elapsed time = 0.686 seconds
00:04:21.070 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.070 EAL: request: mp_malloc_sync
00:04:21.070 EAL: No shared files mode enabled, IPC is disabled
00:04:21.070 EAL: Heap on socket 0 was shrunk by 2MB
00:04:21.070 EAL: No shared files mode enabled, IPC is disabled
00:04:21.070 EAL: No shared files mode enabled, IPC is disabled
00:04:21.070 EAL: No shared files mode enabled, IPC is disabled
00:04:21.070
00:04:21.070 real 0m0.839s
00:04:21.070 user 0m0.439s
00:04:21.070 sys 0m0.370s
00:04:21.070 18:03:22 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:21.070 18:03:22 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:21.070 ************************************
00:04:21.070 END TEST env_vtophys
00:04:21.070 ************************************
00:04:21.070 18:03:22 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:21.070 18:03:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:21.070 18:03:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:21.070 18:03:22 env -- common/autotest_common.sh@10 -- # set +x
00:04:21.331 ************************************
00:04:21.331 START TEST env_pci ************************************
00:04:21.331 18:03:22 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:21.331
00:04:21.331
00:04:21.331 CUnit - A unit testing framework for C - Version 2.1-3
00:04:21.331 http://cunit.sourceforge.net/
00:04:21.331
00:04:21.331
00:04:21.331 Suite: pci
00:04:21.331 Test: pci_hook ...[2024-11-19 18:03:22.593681] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1745925 has claimed it
00:04:21.331 EAL: Cannot find device (10000:00:01.0)
00:04:21.331 EAL: Failed to attach device on primary process
00:04:21.331 passed
00:04:21.331
00:04:21.331 Run Summary: Type Total Ran Passed Failed Inactive
00:04:21.331 suites 1 1 n/a 0 0
00:04:21.331 tests 1 1 1 0 0
00:04:21.331 asserts 25 25 25 0 n/a
00:04:21.331
00:04:21.331 Elapsed time = 0.031 seconds
00:04:21.331
00:04:21.331 real 0m0.052s
00:04:21.331 user 0m0.017s
00:04:21.331 sys 0m0.034s
00:04:21.331 18:03:22 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:21.331 18:03:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:21.331 ************************************
00:04:21.331 END TEST env_pci ************************************
00:04:21.331 18:03:22 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:21.331 18:03:22 env -- env/env.sh@15 -- # uname
00:04:21.331 18:03:22 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:21.331 18:03:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:21.331 18:03:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:21.331 18:03:22 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:21.331 18:03:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:21.331 18:03:22 env -- common/autotest_common.sh@10 -- # set +x
00:04:21.331 ************************************
00:04:21.331 START TEST env_dpdk_post_init ************************************
00:04:21.331 18:03:22 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 EAL: Detected CPU lcores: 128
00:04:21.331 EAL: Detected NUMA nodes: 2
00:04:21.331 EAL: Detected shared linkage of DPDK
00:04:21.331 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:21.331 EAL: Selected IOVA mode 'VA'
00:04:21.331 EAL: VFIO support initialized
00:04:21.331 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:21.592 EAL: Using IOMMU type 1 (Type 1)
00:04:21.592 EAL: Ignore mapping IO port bar(1)
00:04:21.854 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)
00:04:21.854 EAL: Ignore mapping IO port bar(1)
00:04:21.854 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0)
00:04:22.116 EAL: Ignore mapping IO port bar(1)
00:04:22.116 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0)
00:04:22.377 EAL: Ignore mapping IO port bar(1)
00:04:22.377 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0)
00:04:22.639 EAL: Ignore mapping IO port bar(1)
00:04:22.639 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0)
00:04:22.639 EAL: Ignore mapping IO port bar(1)
00:04:22.899 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0)
00:04:22.899 EAL: Ignore mapping IO port bar(1)
00:04:23.160 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0)
00:04:23.160 EAL: Ignore mapping IO port bar(1)
00:04:23.421 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0)
00:04:23.421 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:04:23.683 EAL: Ignore mapping IO port bar(1)
00:04:23.683 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)
00:04:23.944 EAL: Ignore mapping IO port bar(1)
00:04:23.944 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1)
00:04:24.205 EAL: Ignore mapping IO port bar(1)
00:04:24.205 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1)
00:04:24.205 EAL: Ignore mapping IO port bar(1)
00:04:24.466 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1)
00:04:24.466 EAL: Ignore mapping IO port bar(1)
00:04:24.727 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1)
00:04:24.727 EAL: Ignore mapping IO port bar(1)
00:04:24.988 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1)
00:04:24.988 EAL: Ignore mapping IO port bar(1)
00:04:24.988 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1)
00:04:25.250 EAL: Ignore mapping IO port bar(1)
00:04:25.250 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1)
00:04:25.250 EAL: Releasing PCI mapped resource for 0000:65:00.0
00:04:25.250 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000
00:04:25.511 Starting DPDK initialization...
00:04:25.511 Starting SPDK post initialization...
00:04:25.511 SPDK NVMe probe
00:04:25.511 Attaching to 0000:65:00.0
00:04:25.511 Attached to 0000:65:00.0
00:04:25.511 Cleaning up...
00:04:27.429
00:04:27.429 real 0m5.750s
00:04:27.429 user 0m0.118s
00:04:27.429 sys 0m0.186s
00:04:27.429 18:03:28 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:27.429 18:03:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:27.429 ************************************
00:04:27.429 END TEST env_dpdk_post_init ************************************
00:04:27.429 18:03:28 env -- env/env.sh@26 -- # uname
00:04:27.429 18:03:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:27.429 18:03:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:27.429 18:03:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:27.429 18:03:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:27.429 18:03:28 env -- common/autotest_common.sh@10 -- # set +x
00:04:27.429 ************************************
00:04:27.429 START TEST env_mem_callbacks ************************************
00:04:27.429 18:03:28 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks EAL: Detected CPU lcores: 128
00:04:27.429 EAL: Detected NUMA nodes: 2
00:04:27.429 EAL: Detected shared linkage of DPDK
00:04:27.429 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:27.429 EAL: Selected IOVA mode 'VA'
00:04:27.429 EAL: VFIO support initialized
00:04:27.429 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:27.429
00:04:27.429
00:04:27.429 CUnit - A unit testing framework for C - Version 2.1-3
00:04:27.429 http://cunit.sourceforge.net/
00:04:27.429
00:04:27.429
00:04:27.429 Suite: memory
00:04:27.429 Test: test ...
00:04:27.429 register 0x200000200000 2097152
00:04:27.429 malloc 3145728
00:04:27.429 register 0x200000400000 4194304
00:04:27.429 buf 0x200000500000 len 3145728 PASSED
00:04:27.429 malloc 64
00:04:27.429 buf 0x2000004fff40 len 64 PASSED
00:04:27.429 malloc 4194304
00:04:27.429 register 0x200000800000 6291456
00:04:27.429 buf 0x200000a00000 len 4194304 PASSED
00:04:27.429 free 0x200000500000 3145728
00:04:27.429 free 0x2000004fff40 64
00:04:27.429 unregister 0x200000400000 4194304 PASSED
00:04:27.429 free 0x200000a00000 4194304
00:04:27.429 unregister 0x200000800000 6291456 PASSED
00:04:27.429 malloc 8388608
00:04:27.429 register 0x200000400000 10485760
00:04:27.429 buf 0x200000600000 len 8388608 PASSED
00:04:27.429 free 0x200000600000 8388608
00:04:27.429 unregister 0x200000400000 10485760 PASSED
00:04:27.429 passed
00:04:27.429
00:04:27.429 Run Summary: Type Total Ran Passed Failed Inactive
00:04:27.429 suites 1 1 n/a 0 0
00:04:27.429 tests 1 1 1 0 0
00:04:27.429 asserts 15 15 15 0 n/a
00:04:27.429
00:04:27.429 Elapsed time = 0.010 seconds
00:04:27.429
00:04:27.429 real 0m0.068s
00:04:27.429 user 0m0.024s
00:04:27.429 sys 0m0.043s
00:04:27.429 18:03:28 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:27.429 18:03:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:27.429 ************************************
00:04:27.429 END TEST env_mem_callbacks ************************************
00:04:27.429
00:04:27.429 real 0m7.545s
00:04:27.429 user 0m1.063s
00:04:27.429 sys 0m1.039s
00:04:27.429 18:03:28 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:27.429 18:03:28 env -- common/autotest_common.sh@10 -- # set +x
00:04:27.429 ************************************
00:04:27.429 END TEST env
00:04:27.429 ************************************
00:04:27.429 18:03:28 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:27.429 18:03:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:27.429 18:03:28 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:27.429 18:03:28 -- common/autotest_common.sh@10 -- # set +x
00:04:27.429 ************************************
00:04:27.429 START TEST rpc
00:04:27.429 ************************************
00:04:27.429 18:03:28 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:27.429 * Looking for test storage...
00:04:27.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:27.429 18:03:28 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:27.429 18:03:28 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:04:27.429 18:03:28 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:27.690 18:03:28 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:27.690 18:03:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:27.690 18:03:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:27.690 18:03:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:27.690 18:03:28 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:27.690 18:03:28 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:27.690 18:03:28 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:27.690 18:03:28 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:27.690 18:03:28 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:27.690 18:03:28 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:27.690 18:03:28 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:27.690 18:03:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:27.690 18:03:28 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:27.690 18:03:28 rpc -- scripts/common.sh@345 -- # : 1
00:04:27.690 18:03:28 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:27.690 18:03:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:27.690 18:03:28 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:27.690 18:03:28 rpc -- scripts/common.sh@353 -- # local d=1
00:04:27.690 18:03:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:27.690 18:03:28 rpc -- scripts/common.sh@355 -- # echo 1
00:04:27.690 18:03:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:27.690 18:03:28 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:27.690 18:03:28 rpc -- scripts/common.sh@353 -- # local d=2
00:04:27.690 18:03:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:27.690 18:03:28 rpc -- scripts/common.sh@355 -- # echo 2
00:04:27.690 18:03:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:27.690 18:03:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:27.690 18:03:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:27.690 18:03:28 rpc -- scripts/common.sh@368 -- # return 0
00:04:27.690 18:03:28 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:27.690 18:03:28 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:27.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:27.690 --rc genhtml_branch_coverage=1
00:04:27.690 --rc genhtml_function_coverage=1
00:04:27.690 --rc genhtml_legend=1
00:04:27.690 --rc geninfo_all_blocks=1
00:04:27.690 --rc geninfo_unexecuted_blocks=1
00:04:27.690
00:04:27.690 '
00:04:27.690 18:03:28 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:27.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:27.690 --rc genhtml_branch_coverage=1
00:04:27.690 --rc genhtml_function_coverage=1
00:04:27.690 --rc genhtml_legend=1
00:04:27.690 --rc geninfo_all_blocks=1
00:04:27.690 --rc geninfo_unexecuted_blocks=1
00:04:27.690
00:04:27.690 '
00:04:27.690 18:03:28 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:27.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:27.690 --rc genhtml_branch_coverage=1
00:04:27.690 --rc genhtml_function_coverage=1
00:04:27.690 --rc genhtml_legend=1
00:04:27.690 --rc geninfo_all_blocks=1
00:04:27.690 --rc geninfo_unexecuted_blocks=1
00:04:27.690
00:04:27.690 '
00:04:27.690 18:03:28 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:27.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:27.690 --rc genhtml_branch_coverage=1
00:04:27.690 --rc genhtml_function_coverage=1
00:04:27.690 --rc genhtml_legend=1
00:04:27.690 --rc geninfo_all_blocks=1
00:04:27.690 --rc geninfo_unexecuted_blocks=1
00:04:27.690
00:04:27.690 '
00:04:27.690 18:03:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1747265
00:04:27.690 18:03:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:27.690 18:03:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:27.690 18:03:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1747265
00:04:27.690 18:03:28 rpc -- common/autotest_common.sh@835 -- # '[' -z 1747265 ']'
00:04:27.690 18:03:28 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:27.690 18:03:28 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:27.690 18:03:28 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:27.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:27.690 18:03:28 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:27.690 18:03:28 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:27.690 [2024-11-19 18:03:29.018459] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization...
00:04:27.690 [2024-11-19 18:03:29.018530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747265 ]
00:04:27.690 [2024-11-19 18:03:29.109922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:27.690 [2024-11-19 18:03:29.161430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:27.690 [2024-11-19 18:03:29.161487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1747265' to capture a snapshot of events at runtime.
00:04:27.690 [2024-11-19 18:03:29.161496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:27.690 [2024-11-19 18:03:29.161504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:27.690 [2024-11-19 18:03:29.161511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1747265 for offline analysis/debug.
00:04:27.950 [2024-11-19 18:03:29.162280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:28.520 18:03:29 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:28.520 18:03:29 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:28.520 18:03:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:28.520 18:03:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:28.520 18:03:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:28.520 18:03:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:28.520 18:03:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:28.520 18:03:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:28.520 18:03:29 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:28.520 ************************************
00:04:28.520 START TEST rpc_integrity ************************************
00:04:28.520 18:03:29 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:28.520 18:03:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:28.520 18:03:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:28.520 18:03:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:28.520 18:03:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:28.520 18:03:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:28.520 18:03:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:28.520 18:03:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:28.520 18:03:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:28.520 18:03:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:28.520 18:03:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:28.520 18:03:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:28.520 18:03:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:28.520 18:03:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:28.520 18:03:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:28.520 18:03:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:28.520 18:03:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:28.520 18:03:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:28.520 {
00:04:28.520 "name": "Malloc0",
00:04:28.520 "aliases": [
00:04:28.520 "f97d9b9e-b11c-42be-9ae5-e5bb868f4886"
00:04:28.520 ],
00:04:28.520 "product_name": "Malloc disk",
00:04:28.520 "block_size": 512,
00:04:28.520 "num_blocks": 16384,
00:04:28.520 "uuid": "f97d9b9e-b11c-42be-9ae5-e5bb868f4886",
00:04:28.520 "assigned_rate_limits": {
00:04:28.520 "rw_ios_per_sec": 0,
00:04:28.520 "rw_mbytes_per_sec": 0,
00:04:28.520 "r_mbytes_per_sec": 0,
00:04:28.520 "w_mbytes_per_sec": 0
00:04:28.520 },
00:04:28.520 "claimed": false,
00:04:28.520 "zoned": false,
00:04:28.520 "supported_io_types": {
00:04:28.520 "read": true,
00:04:28.520 "write": true,
00:04:28.520 "unmap": true,
00:04:28.520 "flush": true,
00:04:28.520 "reset": true,
00:04:28.520 "nvme_admin": false,
00:04:28.520 "nvme_io": false,
00:04:28.520 "nvme_io_md": false,
00:04:28.520 "write_zeroes": true,
00:04:28.520 "zcopy": true,
00:04:28.520 "get_zone_info": false,
00:04:28.520 "zone_management": false,
00:04:28.520 "zone_append": false,
00:04:28.520 "compare": false,
00:04:28.520 "compare_and_write": false,
00:04:28.520 "abort": true,
00:04:28.520 "seek_hole": false,
00:04:28.520 "seek_data": false,
00:04:28.521 "copy": true,
00:04:28.521 "nvme_iov_md": false
00:04:28.521 },
00:04:28.521 "memory_domains": [
00:04:28.521 {
00:04:28.521 "dma_device_id": "system",
00:04:28.521 "dma_device_type": 1
00:04:28.521 },
00:04:28.521 {
00:04:28.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:28.521 "dma_device_type": 2
00:04:28.521 }
00:04:28.521 ],
00:04:28.521 "driver_specific": {}
00:04:28.521 }
00:04:28.521 ]'
00:04:28.521 18:03:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:28.521 18:03:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:28.521 18:03:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:28.521 18:03:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:28.521 18:03:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:28.521 [2024-11-19 18:03:29.988212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:28.521 [2024-11-19 18:03:29.988262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:28.521 [2024-11-19 18:03:29.988277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2359db0
00:04:28.521 [2024-11-19 18:03:29.988285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:28.781 [2024-11-19 18:03:29.989812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:28.781 [2024-11-19 18:03:29.989850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:28.781 Passthru0
00:04:28.781 18:03:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:28.781 18:03:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:28.781 18:03:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:28.781 18:03:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:28.781 18:03:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:28.781 18:03:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:28.781 {
00:04:28.781 "name": "Malloc0",
00:04:28.781 "aliases": [
00:04:28.781 "f97d9b9e-b11c-42be-9ae5-e5bb868f4886"
00:04:28.781 ],
00:04:28.781 "product_name": "Malloc disk",
00:04:28.781 "block_size": 512,
00:04:28.781 "num_blocks": 16384,
00:04:28.781 "uuid": "f97d9b9e-b11c-42be-9ae5-e5bb868f4886",
00:04:28.781 "assigned_rate_limits": {
00:04:28.781 "rw_ios_per_sec": 0,
00:04:28.781 "rw_mbytes_per_sec": 0,
00:04:28.781 "r_mbytes_per_sec": 0,
00:04:28.781 "w_mbytes_per_sec": 0
00:04:28.781 },
00:04:28.781 "claimed": true,
00:04:28.781 "claim_type": "exclusive_write",
00:04:28.781 "zoned": false,
00:04:28.781 "supported_io_types": {
00:04:28.781 "read": true,
00:04:28.781 "write": true,
00:04:28.781 "unmap": true,
00:04:28.781 "flush": true,
00:04:28.781 "reset": true,
00:04:28.781 "nvme_admin": false,
00:04:28.781 "nvme_io": false,
00:04:28.781 "nvme_io_md": false,
00:04:28.781 "write_zeroes": true,
00:04:28.781 "zcopy": true,
00:04:28.781 "get_zone_info": false,
00:04:28.781 "zone_management": false,
00:04:28.781 "zone_append": false,
00:04:28.781 "compare": false,
00:04:28.781 "compare_and_write": false,
00:04:28.781 "abort": true,
00:04:28.781 "seek_hole": false,
00:04:28.781 "seek_data": false,
00:04:28.781 "copy": true,
00:04:28.781 "nvme_iov_md": false
00:04:28.781 },
00:04:28.781 "memory_domains": [
00:04:28.781 {
00:04:28.781 "dma_device_id": "system",
00:04:28.781 "dma_device_type": 1
00:04:28.781 },
00:04:28.781 {
00:04:28.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:28.781 "dma_device_type": 2
00:04:28.781 }
00:04:28.781 ],
00:04:28.781 "driver_specific": {}
00:04:28.781 },
00:04:28.781 {
00:04:28.781 "name": "Passthru0",
00:04:28.781 "aliases": [
00:04:28.781 "421f731b-33e9-528b-a804-f1b52f567cca"
00:04:28.781 ],
00:04:28.781 "product_name": "passthru",
00:04:28.781 "block_size": 512,
00:04:28.781 "num_blocks": 16384,
00:04:28.781 "uuid": "421f731b-33e9-528b-a804-f1b52f567cca",
00:04:28.781 "assigned_rate_limits": {
00:04:28.781 "rw_ios_per_sec": 0,
00:04:28.781 "rw_mbytes_per_sec": 0,
00:04:28.781 "r_mbytes_per_sec": 0,
00:04:28.781 "w_mbytes_per_sec": 0
00:04:28.781 },
00:04:28.781 "claimed": false,
00:04:28.781 "zoned": false,
00:04:28.781 "supported_io_types": {
00:04:28.782 "read": true,
00:04:28.782 "write": true,
00:04:28.782 "unmap": true,
00:04:28.782 "flush": true,
00:04:28.782 "reset": true,
00:04:28.782 "nvme_admin": false,
00:04:28.782 "nvme_io": false,
00:04:28.782 "nvme_io_md": false,
00:04:28.782 "write_zeroes": true,
00:04:28.782 "zcopy": true,
00:04:28.782 "get_zone_info": false,
00:04:28.782 "zone_management": false,
00:04:28.782 "zone_append": false,
00:04:28.782 "compare": false,
00:04:28.782 "compare_and_write": false,
00:04:28.782 "abort": true,
00:04:28.782 "seek_hole": false,
00:04:28.782 "seek_data": false,
00:04:28.782 "copy": true,
00:04:28.782 "nvme_iov_md": false
00:04:28.782 },
00:04:28.782 "memory_domains": [
00:04:28.782 {
00:04:28.782 "dma_device_id": "system",
00:04:28.782 "dma_device_type": 1
00:04:28.782 },
00:04:28.782 {
00:04:28.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:28.782 "dma_device_type": 2
00:04:28.782 }
00:04:28.782 ],
00:04:28.782 "driver_specific": {
00:04:28.782 "passthru": {
00:04:28.782 "name": "Passthru0",
00:04:28.782 "base_bdev_name": "Malloc0"
00:04:28.782 }
00:04:28.782 }
00:04:28.782 }
00:04:28.782 ]'
00:04:28.782 18:03:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:28.782 18:03:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:28.782 18:03:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:28.782 18:03:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:28.782 18:03:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:28.782 18:03:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:28.782 18:03:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:28.782 18:03:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:28.782 18:03:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:28.782 18:03:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:28.782 18:03:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:28.782 18:03:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:28.782 18:03:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:28.782 18:03:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:28.782 18:03:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:28.782 18:03:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:28.782 18:03:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:28.782
00:04:28.782 real 0m0.299s
00:04:28.782 user 0m0.190s
00:04:28.782 sys 0m0.035s
00:04:28.782 18:03:30 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:28.782 18:03:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:28.782 ************************************
00:04:28.782 END TEST rpc_integrity
00:04:28.782 ************************************
00:04:28.782 18:03:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:28.782 18:03:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:28.782 18:03:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:28.782 18:03:30 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:28.782 ************************************
00:04:28.782 START TEST rpc_plugins
00:04:28.782 ************************************ 00:04:28.782 18:03:30 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:28.782 18:03:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:28.782 18:03:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.782 18:03:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.782 18:03:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.782 18:03:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:28.782 18:03:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:28.782 18:03:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.782 18:03:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.042 18:03:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.042 18:03:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:29.042 { 00:04:29.042 "name": "Malloc1", 00:04:29.042 "aliases": [ 00:04:29.042 "4398a892-66cd-49b0-b6be-349b4b8c5480" 00:04:29.042 ], 00:04:29.042 "product_name": "Malloc disk", 00:04:29.042 "block_size": 4096, 00:04:29.042 "num_blocks": 256, 00:04:29.042 "uuid": "4398a892-66cd-49b0-b6be-349b4b8c5480", 00:04:29.042 "assigned_rate_limits": { 00:04:29.042 "rw_ios_per_sec": 0, 00:04:29.042 "rw_mbytes_per_sec": 0, 00:04:29.042 "r_mbytes_per_sec": 0, 00:04:29.042 "w_mbytes_per_sec": 0 00:04:29.042 }, 00:04:29.042 "claimed": false, 00:04:29.042 "zoned": false, 00:04:29.042 "supported_io_types": { 00:04:29.042 "read": true, 00:04:29.042 "write": true, 00:04:29.042 "unmap": true, 00:04:29.042 "flush": true, 00:04:29.042 "reset": true, 00:04:29.042 "nvme_admin": false, 00:04:29.042 "nvme_io": false, 00:04:29.042 "nvme_io_md": false, 00:04:29.042 "write_zeroes": true, 00:04:29.042 "zcopy": true, 00:04:29.042 "get_zone_info": false, 00:04:29.042 "zone_management": false, 00:04:29.042 
"zone_append": false, 00:04:29.042 "compare": false, 00:04:29.042 "compare_and_write": false, 00:04:29.042 "abort": true, 00:04:29.042 "seek_hole": false, 00:04:29.042 "seek_data": false, 00:04:29.042 "copy": true, 00:04:29.042 "nvme_iov_md": false 00:04:29.042 }, 00:04:29.042 "memory_domains": [ 00:04:29.042 { 00:04:29.042 "dma_device_id": "system", 00:04:29.042 "dma_device_type": 1 00:04:29.042 }, 00:04:29.042 { 00:04:29.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.042 "dma_device_type": 2 00:04:29.042 } 00:04:29.042 ], 00:04:29.042 "driver_specific": {} 00:04:29.042 } 00:04:29.042 ]' 00:04:29.042 18:03:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:29.042 18:03:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:29.042 18:03:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:29.042 18:03:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.042 18:03:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.043 18:03:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.043 18:03:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:29.043 18:03:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.043 18:03:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.043 18:03:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.043 18:03:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:29.043 18:03:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:29.043 18:03:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:29.043 00:04:29.043 real 0m0.158s 00:04:29.043 user 0m0.096s 00:04:29.043 sys 0m0.023s 00:04:29.043 18:03:30 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.043 18:03:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.043 ************************************ 
00:04:29.043 END TEST rpc_plugins 00:04:29.043 ************************************ 00:04:29.043 18:03:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:29.043 18:03:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.043 18:03:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.043 18:03:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.043 ************************************ 00:04:29.043 START TEST rpc_trace_cmd_test 00:04:29.043 ************************************ 00:04:29.043 18:03:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:29.043 18:03:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:29.043 18:03:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:29.043 18:03:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.043 18:03:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:29.043 18:03:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.043 18:03:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:29.043 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1747265", 00:04:29.043 "tpoint_group_mask": "0x8", 00:04:29.043 "iscsi_conn": { 00:04:29.043 "mask": "0x2", 00:04:29.043 "tpoint_mask": "0x0" 00:04:29.043 }, 00:04:29.043 "scsi": { 00:04:29.043 "mask": "0x4", 00:04:29.043 "tpoint_mask": "0x0" 00:04:29.043 }, 00:04:29.043 "bdev": { 00:04:29.043 "mask": "0x8", 00:04:29.043 "tpoint_mask": "0xffffffffffffffff" 00:04:29.043 }, 00:04:29.043 "nvmf_rdma": { 00:04:29.043 "mask": "0x10", 00:04:29.043 "tpoint_mask": "0x0" 00:04:29.043 }, 00:04:29.043 "nvmf_tcp": { 00:04:29.043 "mask": "0x20", 00:04:29.043 "tpoint_mask": "0x0" 00:04:29.043 }, 00:04:29.043 "ftl": { 00:04:29.043 "mask": "0x40", 00:04:29.043 "tpoint_mask": "0x0" 00:04:29.043 }, 00:04:29.043 "blobfs": { 00:04:29.043 "mask": "0x80", 00:04:29.043 
"tpoint_mask": "0x0" 00:04:29.043 }, 00:04:29.043 "dsa": { 00:04:29.043 "mask": "0x200", 00:04:29.043 "tpoint_mask": "0x0" 00:04:29.043 }, 00:04:29.043 "thread": { 00:04:29.043 "mask": "0x400", 00:04:29.043 "tpoint_mask": "0x0" 00:04:29.043 }, 00:04:29.043 "nvme_pcie": { 00:04:29.043 "mask": "0x800", 00:04:29.043 "tpoint_mask": "0x0" 00:04:29.043 }, 00:04:29.043 "iaa": { 00:04:29.043 "mask": "0x1000", 00:04:29.043 "tpoint_mask": "0x0" 00:04:29.043 }, 00:04:29.043 "nvme_tcp": { 00:04:29.043 "mask": "0x2000", 00:04:29.043 "tpoint_mask": "0x0" 00:04:29.043 }, 00:04:29.043 "bdev_nvme": { 00:04:29.043 "mask": "0x4000", 00:04:29.043 "tpoint_mask": "0x0" 00:04:29.043 }, 00:04:29.043 "sock": { 00:04:29.043 "mask": "0x8000", 00:04:29.043 "tpoint_mask": "0x0" 00:04:29.043 }, 00:04:29.043 "blob": { 00:04:29.043 "mask": "0x10000", 00:04:29.043 "tpoint_mask": "0x0" 00:04:29.043 }, 00:04:29.043 "bdev_raid": { 00:04:29.043 "mask": "0x20000", 00:04:29.043 "tpoint_mask": "0x0" 00:04:29.043 }, 00:04:29.043 "scheduler": { 00:04:29.043 "mask": "0x40000", 00:04:29.043 "tpoint_mask": "0x0" 00:04:29.043 } 00:04:29.043 }' 00:04:29.043 18:03:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:29.304 18:03:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:29.304 18:03:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:29.304 18:03:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:29.304 18:03:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:29.304 18:03:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:29.304 18:03:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:29.304 18:03:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:29.304 18:03:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:29.304 18:03:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:29.304 00:04:29.304 real 0m0.255s 00:04:29.304 user 0m0.208s 00:04:29.304 sys 0m0.038s 00:04:29.304 18:03:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.304 18:03:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:29.304 ************************************ 00:04:29.304 END TEST rpc_trace_cmd_test 00:04:29.304 ************************************ 00:04:29.304 18:03:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:29.304 18:03:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:29.304 18:03:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:29.304 18:03:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.304 18:03:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.304 18:03:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.566 ************************************ 00:04:29.566 START TEST rpc_daemon_integrity 00:04:29.566 ************************************ 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:29.566 { 00:04:29.566 "name": "Malloc2", 00:04:29.566 "aliases": [ 00:04:29.566 "742426a5-1649-4e96-8de1-2e15900e94c0" 00:04:29.566 ], 00:04:29.566 "product_name": "Malloc disk", 00:04:29.566 "block_size": 512, 00:04:29.566 "num_blocks": 16384, 00:04:29.566 "uuid": "742426a5-1649-4e96-8de1-2e15900e94c0", 00:04:29.566 "assigned_rate_limits": { 00:04:29.566 "rw_ios_per_sec": 0, 00:04:29.566 "rw_mbytes_per_sec": 0, 00:04:29.566 "r_mbytes_per_sec": 0, 00:04:29.566 "w_mbytes_per_sec": 0 00:04:29.566 }, 00:04:29.566 "claimed": false, 00:04:29.566 "zoned": false, 00:04:29.566 "supported_io_types": { 00:04:29.566 "read": true, 00:04:29.566 "write": true, 00:04:29.566 "unmap": true, 00:04:29.566 "flush": true, 00:04:29.566 "reset": true, 00:04:29.566 "nvme_admin": false, 00:04:29.566 "nvme_io": false, 00:04:29.566 "nvme_io_md": false, 00:04:29.566 "write_zeroes": true, 00:04:29.566 "zcopy": true, 00:04:29.566 "get_zone_info": false, 00:04:29.566 "zone_management": false, 00:04:29.566 "zone_append": false, 00:04:29.566 "compare": false, 00:04:29.566 "compare_and_write": false, 00:04:29.566 "abort": true, 00:04:29.566 "seek_hole": false, 00:04:29.566 "seek_data": false, 00:04:29.566 "copy": true, 00:04:29.566 "nvme_iov_md": false 00:04:29.566 }, 00:04:29.566 "memory_domains": [ 00:04:29.566 { 
00:04:29.566 "dma_device_id": "system", 00:04:29.566 "dma_device_type": 1 00:04:29.566 }, 00:04:29.566 { 00:04:29.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.566 "dma_device_type": 2 00:04:29.566 } 00:04:29.566 ], 00:04:29.566 "driver_specific": {} 00:04:29.566 } 00:04:29.566 ]' 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.566 [2024-11-19 18:03:30.946801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:29.566 [2024-11-19 18:03:30.946852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:29.566 [2024-11-19 18:03:30.946869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x248a8d0 00:04:29.566 [2024-11-19 18:03:30.946878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:29.566 [2024-11-19 18:03:30.948348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:29.566 [2024-11-19 18:03:30.948385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:29.566 Passthru0 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
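The rpc_daemon_integrity steps above capture `rpc_cmd bdev_get_bdevs` output into `bdevs` and assert with `jq length` that exactly two bdevs exist (the Malloc2 base plus its Passthru0 wrapper). A minimal self-contained stand-in for that check — the JSON here is a hypothetical sample, not live target output, and python3 replaces jq so the sketch runs without a jq install:

```shell
#!/usr/bin/env bash
# Stand-in for the integrity check in the log: count entries in a
# bdev_get_bdevs-style JSON array and assert the base + passthru pair.
bdevs='[{"name":"Malloc2"},{"name":"Passthru0"}]'   # sample, not from a live target
count=$(python3 -c 'import json,sys; print(len(json.loads(sys.argv[1])))' "$bdevs")
[ "$count" -eq 2 ] && echo "PASS: 2 bdevs"
```

The real test performs the same length comparison (`'[' 2 == 2 ']'`) after creating the passthru, and again expects length 0 after both deletes.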
00:04:29.566 18:03:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:29.566 { 00:04:29.566 "name": "Malloc2", 00:04:29.566 "aliases": [ 00:04:29.566 "742426a5-1649-4e96-8de1-2e15900e94c0" 00:04:29.566 ], 00:04:29.566 "product_name": "Malloc disk", 00:04:29.566 "block_size": 512, 00:04:29.566 "num_blocks": 16384, 00:04:29.566 "uuid": "742426a5-1649-4e96-8de1-2e15900e94c0", 00:04:29.566 "assigned_rate_limits": { 00:04:29.566 "rw_ios_per_sec": 0, 00:04:29.566 "rw_mbytes_per_sec": 0, 00:04:29.566 "r_mbytes_per_sec": 0, 00:04:29.566 "w_mbytes_per_sec": 0 00:04:29.566 }, 00:04:29.566 "claimed": true, 00:04:29.566 "claim_type": "exclusive_write", 00:04:29.566 "zoned": false, 00:04:29.566 "supported_io_types": { 00:04:29.566 "read": true, 00:04:29.566 "write": true, 00:04:29.566 "unmap": true, 00:04:29.566 "flush": true, 00:04:29.566 "reset": true, 00:04:29.566 "nvme_admin": false, 00:04:29.566 "nvme_io": false, 00:04:29.566 "nvme_io_md": false, 00:04:29.566 "write_zeroes": true, 00:04:29.566 "zcopy": true, 00:04:29.566 "get_zone_info": false, 00:04:29.566 "zone_management": false, 00:04:29.566 "zone_append": false, 00:04:29.566 "compare": false, 00:04:29.566 "compare_and_write": false, 00:04:29.566 "abort": true, 00:04:29.566 "seek_hole": false, 00:04:29.566 "seek_data": false, 00:04:29.566 "copy": true, 00:04:29.566 "nvme_iov_md": false 00:04:29.566 }, 00:04:29.566 "memory_domains": [ 00:04:29.566 { 00:04:29.566 "dma_device_id": "system", 00:04:29.566 "dma_device_type": 1 00:04:29.566 }, 00:04:29.566 { 00:04:29.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.566 "dma_device_type": 2 00:04:29.566 } 00:04:29.566 ], 00:04:29.566 "driver_specific": {} 00:04:29.566 }, 00:04:29.566 { 00:04:29.566 "name": "Passthru0", 00:04:29.566 "aliases": [ 00:04:29.566 "074bb4d3-eab0-58bf-959b-415417d616af" 00:04:29.566 ], 00:04:29.566 "product_name": "passthru", 00:04:29.566 "block_size": 512, 00:04:29.566 "num_blocks": 16384, 00:04:29.566 "uuid": 
"074bb4d3-eab0-58bf-959b-415417d616af", 00:04:29.566 "assigned_rate_limits": { 00:04:29.566 "rw_ios_per_sec": 0, 00:04:29.566 "rw_mbytes_per_sec": 0, 00:04:29.566 "r_mbytes_per_sec": 0, 00:04:29.566 "w_mbytes_per_sec": 0 00:04:29.566 }, 00:04:29.566 "claimed": false, 00:04:29.566 "zoned": false, 00:04:29.566 "supported_io_types": { 00:04:29.566 "read": true, 00:04:29.566 "write": true, 00:04:29.566 "unmap": true, 00:04:29.566 "flush": true, 00:04:29.566 "reset": true, 00:04:29.566 "nvme_admin": false, 00:04:29.566 "nvme_io": false, 00:04:29.566 "nvme_io_md": false, 00:04:29.566 "write_zeroes": true, 00:04:29.566 "zcopy": true, 00:04:29.566 "get_zone_info": false, 00:04:29.566 "zone_management": false, 00:04:29.567 "zone_append": false, 00:04:29.567 "compare": false, 00:04:29.567 "compare_and_write": false, 00:04:29.567 "abort": true, 00:04:29.567 "seek_hole": false, 00:04:29.567 "seek_data": false, 00:04:29.567 "copy": true, 00:04:29.567 "nvme_iov_md": false 00:04:29.567 }, 00:04:29.567 "memory_domains": [ 00:04:29.567 { 00:04:29.567 "dma_device_id": "system", 00:04:29.567 "dma_device_type": 1 00:04:29.567 }, 00:04:29.567 { 00:04:29.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.567 "dma_device_type": 2 00:04:29.567 } 00:04:29.567 ], 00:04:29.567 "driver_specific": { 00:04:29.567 "passthru": { 00:04:29.567 "name": "Passthru0", 00:04:29.567 "base_bdev_name": "Malloc2" 00:04:29.567 } 00:04:29.567 } 00:04:29.567 } 00:04:29.567 ]' 00:04:29.567 18:03:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:29.567 18:03:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:29.567 18:03:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:29.567 18:03:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.567 18:03:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.829 18:03:31 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.829 18:03:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:29.829 18:03:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.829 18:03:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.829 18:03:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.829 18:03:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:29.829 18:03:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.829 18:03:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.829 18:03:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:29.829 18:03:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:29.829 18:03:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:29.829 18:03:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:29.829 00:04:29.829 real 0m0.305s 00:04:29.829 user 0m0.197s 00:04:29.829 sys 0m0.042s 00:04:29.829 18:03:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.829 18:03:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.829 ************************************ 00:04:29.829 END TEST rpc_daemon_integrity 00:04:29.829 ************************************ 00:04:29.829 18:03:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:29.829 18:03:31 rpc -- rpc/rpc.sh@84 -- # killprocess 1747265 00:04:29.829 18:03:31 rpc -- common/autotest_common.sh@954 -- # '[' -z 1747265 ']' 00:04:29.829 18:03:31 rpc -- common/autotest_common.sh@958 -- # kill -0 1747265 00:04:29.829 18:03:31 rpc -- common/autotest_common.sh@959 -- # uname 00:04:29.829 18:03:31 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.829 18:03:31 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1747265 00:04:29.829 18:03:31 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.829 18:03:31 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.829 18:03:31 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1747265' 00:04:29.829 killing process with pid 1747265 00:04:29.829 18:03:31 rpc -- common/autotest_common.sh@973 -- # kill 1747265 00:04:29.829 18:03:31 rpc -- common/autotest_common.sh@978 -- # wait 1747265 00:04:30.091 00:04:30.091 real 0m2.705s 00:04:30.091 user 0m3.470s 00:04:30.091 sys 0m0.811s 00:04:30.091 18:03:31 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.091 18:03:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.091 ************************************ 00:04:30.091 END TEST rpc 00:04:30.091 ************************************ 00:04:30.091 18:03:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:30.091 18:03:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.091 18:03:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.091 18:03:31 -- common/autotest_common.sh@10 -- # set +x 00:04:30.091 ************************************ 00:04:30.091 START TEST skip_rpc 00:04:30.091 ************************************ 00:04:30.091 18:03:31 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:30.352 * Looking for test storage... 
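The `killprocess 1747265` sequence above probes the target pid with `kill -0`, checks its comm name isn't `sudo`, then kills and waits on it. A sketch of that liveness-probe pattern, using a background `sleep` as a hypothetical stand-in for the spdk_tgt process:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern from the log: `kill -0` as a liveness
# probe, then kill + wait to reap the process.
alive() { kill -0 "$1" 2>/dev/null; }

sleep 5 &                     # stand-in for the spdk_tgt pid
pid=$!
alive "$pid" && echo "alive"
kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null       # reap; exit status 143 (SIGTERM) is expected
alive "$pid" || echo "gone"
```

`kill -0` sends no signal; it only reports whether the pid exists and is signalable, which is why the helper uses it both before and (implicitly, via `wait`) after the kill.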
00:04:30.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:30.352 18:03:31 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.352 18:03:31 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.352 18:03:31 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.352 18:03:31 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.352 18:03:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:30.352 18:03:31 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.352 18:03:31 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.352 --rc genhtml_branch_coverage=1 00:04:30.352 --rc genhtml_function_coverage=1 00:04:30.352 --rc genhtml_legend=1 00:04:30.352 --rc geninfo_all_blocks=1 00:04:30.352 --rc geninfo_unexecuted_blocks=1 00:04:30.352 00:04:30.352 ' 00:04:30.352 18:03:31 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.352 --rc genhtml_branch_coverage=1 00:04:30.352 --rc genhtml_function_coverage=1 00:04:30.352 --rc genhtml_legend=1 00:04:30.352 --rc geninfo_all_blocks=1 00:04:30.352 --rc geninfo_unexecuted_blocks=1 00:04:30.352 00:04:30.352 ' 00:04:30.352 18:03:31 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:30.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.352 --rc genhtml_branch_coverage=1 00:04:30.352 --rc genhtml_function_coverage=1 00:04:30.352 --rc genhtml_legend=1 00:04:30.352 --rc geninfo_all_blocks=1 00:04:30.352 --rc geninfo_unexecuted_blocks=1 00:04:30.352 00:04:30.352 ' 00:04:30.352 18:03:31 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.352 --rc genhtml_branch_coverage=1 00:04:30.352 --rc genhtml_function_coverage=1 00:04:30.352 --rc genhtml_legend=1 00:04:30.352 --rc geninfo_all_blocks=1 00:04:30.352 --rc geninfo_unexecuted_blocks=1 00:04:30.352 00:04:30.352 ' 00:04:30.352 18:03:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:30.352 18:03:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:30.352 18:03:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:30.352 18:03:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.352 18:03:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.352 18:03:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.352 ************************************ 00:04:30.352 START TEST skip_rpc 00:04:30.352 ************************************ 00:04:30.352 18:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:30.352 18:03:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1748112 00:04:30.352 18:03:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.352 18:03:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:30.352 18:03:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
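The lcov gate traced above (`lt 1.15 2` in scripts/common.sh) splits both version strings on `.-:` and compares them field by field. A minimal re-creation of that compare — this is a sketch of the logic shown in the trace, not the real scripts/common.sh helper:

```shell
#!/usr/bin/env bash
# Re-creation of the dotted-version compare the log steps through:
# split on ".-:" and compare numeric fields left to right.
lt() {
  local IFS='.-:'
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i x y
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    x=${a[i]:-0}; y=${b[i]:-0}     # missing fields compare as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1                          # equal => not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

Padding missing fields with 0 matches the trace's behavior, where `1.15` vs `2` is decided on the first field (`1 < 2`) and the suite selects the lcov 2.x option set accordingly.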
00:04:30.613 [2024-11-19 18:03:31.840280] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:04:30.613 [2024-11-19 18:03:31.840341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748112 ] 00:04:30.613 [2024-11-19 18:03:31.931425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.613 [2024-11-19 18:03:31.983705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.901 18:03:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:35.902 18:03:36 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1748112 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1748112 ']' 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1748112 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1748112 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1748112' 00:04:35.902 killing process with pid 1748112 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1748112 00:04:35.902 18:03:36 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1748112 00:04:35.902 00:04:35.902 real 0m5.263s 00:04:35.902 user 0m5.022s 00:04:35.902 sys 0m0.292s 00:04:35.902 18:03:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.902 18:03:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.902 ************************************ 00:04:35.902 END TEST skip_rpc 00:04:35.902 ************************************ 00:04:35.902 18:03:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:35.902 18:03:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.902 18:03:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.902 18:03:37 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.902 ************************************ 00:04:35.902 START TEST skip_rpc_with_json 00:04:35.902 ************************************ 00:04:35.902 18:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:35.902 18:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:35.902 18:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1749153 00:04:35.902 18:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.902 18:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1749153 00:04:35.902 18:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.902 18:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1749153 ']' 00:04:35.902 18:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.902 18:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.902 18:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.902 18:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.902 18:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.902 [2024-11-19 18:03:37.182917] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:04:35.902 [2024-11-19 18:03:37.182973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749153 ] 00:04:35.902 [2024-11-19 18:03:37.266945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.902 [2024-11-19 18:03:37.301072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.847 18:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.847 18:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:36.847 18:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:36.847 18:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.847 18:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.847 [2024-11-19 18:03:37.988250] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:36.847 request: 00:04:36.847 { 00:04:36.847 "trtype": "tcp", 00:04:36.847 "method": "nvmf_get_transports", 00:04:36.847 "req_id": 1 00:04:36.847 } 00:04:36.847 Got JSON-RPC error response 00:04:36.847 response: 00:04:36.847 { 00:04:36.847 "code": -19, 00:04:36.847 "message": "No such device" 00:04:36.847 } 00:04:36.847 18:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:36.847 18:03:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:36.847 18:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.847 18:03:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.847 [2024-11-19 18:03:38.000347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:36.847 18:03:38 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.847 18:03:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:36.847 18:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.847 18:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.847 18:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.847 18:03:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:36.847 { 00:04:36.847 "subsystems": [ 00:04:36.847 { 00:04:36.847 "subsystem": "fsdev", 00:04:36.847 "config": [ 00:04:36.847 { 00:04:36.847 "method": "fsdev_set_opts", 00:04:36.847 "params": { 00:04:36.847 "fsdev_io_pool_size": 65535, 00:04:36.847 "fsdev_io_cache_size": 256 00:04:36.847 } 00:04:36.847 } 00:04:36.847 ] 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "subsystem": "vfio_user_target", 00:04:36.847 "config": null 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "subsystem": "keyring", 00:04:36.847 "config": [] 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "subsystem": "iobuf", 00:04:36.847 "config": [ 00:04:36.847 { 00:04:36.847 "method": "iobuf_set_options", 00:04:36.847 "params": { 00:04:36.847 "small_pool_count": 8192, 00:04:36.847 "large_pool_count": 1024, 00:04:36.847 "small_bufsize": 8192, 00:04:36.847 "large_bufsize": 135168, 00:04:36.847 "enable_numa": false 00:04:36.847 } 00:04:36.847 } 00:04:36.847 ] 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "subsystem": "sock", 00:04:36.847 "config": [ 00:04:36.847 { 00:04:36.847 "method": "sock_set_default_impl", 00:04:36.847 "params": { 00:04:36.847 "impl_name": "posix" 00:04:36.847 } 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "method": "sock_impl_set_options", 00:04:36.847 "params": { 00:04:36.847 "impl_name": "ssl", 00:04:36.847 "recv_buf_size": 4096, 00:04:36.847 "send_buf_size": 4096, 
00:04:36.847 "enable_recv_pipe": true, 00:04:36.847 "enable_quickack": false, 00:04:36.847 "enable_placement_id": 0, 00:04:36.847 "enable_zerocopy_send_server": true, 00:04:36.847 "enable_zerocopy_send_client": false, 00:04:36.847 "zerocopy_threshold": 0, 00:04:36.847 "tls_version": 0, 00:04:36.847 "enable_ktls": false 00:04:36.847 } 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "method": "sock_impl_set_options", 00:04:36.847 "params": { 00:04:36.847 "impl_name": "posix", 00:04:36.847 "recv_buf_size": 2097152, 00:04:36.847 "send_buf_size": 2097152, 00:04:36.847 "enable_recv_pipe": true, 00:04:36.847 "enable_quickack": false, 00:04:36.847 "enable_placement_id": 0, 00:04:36.847 "enable_zerocopy_send_server": true, 00:04:36.847 "enable_zerocopy_send_client": false, 00:04:36.847 "zerocopy_threshold": 0, 00:04:36.847 "tls_version": 0, 00:04:36.847 "enable_ktls": false 00:04:36.847 } 00:04:36.847 } 00:04:36.847 ] 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "subsystem": "vmd", 00:04:36.847 "config": [] 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "subsystem": "accel", 00:04:36.847 "config": [ 00:04:36.847 { 00:04:36.847 "method": "accel_set_options", 00:04:36.847 "params": { 00:04:36.847 "small_cache_size": 128, 00:04:36.847 "large_cache_size": 16, 00:04:36.847 "task_count": 2048, 00:04:36.847 "sequence_count": 2048, 00:04:36.847 "buf_count": 2048 00:04:36.847 } 00:04:36.847 } 00:04:36.847 ] 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "subsystem": "bdev", 00:04:36.847 "config": [ 00:04:36.847 { 00:04:36.847 "method": "bdev_set_options", 00:04:36.847 "params": { 00:04:36.847 "bdev_io_pool_size": 65535, 00:04:36.847 "bdev_io_cache_size": 256, 00:04:36.847 "bdev_auto_examine": true, 00:04:36.847 "iobuf_small_cache_size": 128, 00:04:36.847 "iobuf_large_cache_size": 16 00:04:36.847 } 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "method": "bdev_raid_set_options", 00:04:36.847 "params": { 00:04:36.847 "process_window_size_kb": 1024, 00:04:36.847 "process_max_bandwidth_mb_sec": 0 
00:04:36.847 } 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "method": "bdev_iscsi_set_options", 00:04:36.847 "params": { 00:04:36.847 "timeout_sec": 30 00:04:36.847 } 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "method": "bdev_nvme_set_options", 00:04:36.847 "params": { 00:04:36.847 "action_on_timeout": "none", 00:04:36.847 "timeout_us": 0, 00:04:36.847 "timeout_admin_us": 0, 00:04:36.847 "keep_alive_timeout_ms": 10000, 00:04:36.847 "arbitration_burst": 0, 00:04:36.847 "low_priority_weight": 0, 00:04:36.847 "medium_priority_weight": 0, 00:04:36.847 "high_priority_weight": 0, 00:04:36.847 "nvme_adminq_poll_period_us": 10000, 00:04:36.847 "nvme_ioq_poll_period_us": 0, 00:04:36.847 "io_queue_requests": 0, 00:04:36.847 "delay_cmd_submit": true, 00:04:36.847 "transport_retry_count": 4, 00:04:36.847 "bdev_retry_count": 3, 00:04:36.847 "transport_ack_timeout": 0, 00:04:36.847 "ctrlr_loss_timeout_sec": 0, 00:04:36.847 "reconnect_delay_sec": 0, 00:04:36.847 "fast_io_fail_timeout_sec": 0, 00:04:36.847 "disable_auto_failback": false, 00:04:36.847 "generate_uuids": false, 00:04:36.847 "transport_tos": 0, 00:04:36.847 "nvme_error_stat": false, 00:04:36.847 "rdma_srq_size": 0, 00:04:36.847 "io_path_stat": false, 00:04:36.847 "allow_accel_sequence": false, 00:04:36.847 "rdma_max_cq_size": 0, 00:04:36.847 "rdma_cm_event_timeout_ms": 0, 00:04:36.847 "dhchap_digests": [ 00:04:36.847 "sha256", 00:04:36.847 "sha384", 00:04:36.847 "sha512" 00:04:36.847 ], 00:04:36.847 "dhchap_dhgroups": [ 00:04:36.847 "null", 00:04:36.847 "ffdhe2048", 00:04:36.847 "ffdhe3072", 00:04:36.847 "ffdhe4096", 00:04:36.847 "ffdhe6144", 00:04:36.847 "ffdhe8192" 00:04:36.847 ] 00:04:36.847 } 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "method": "bdev_nvme_set_hotplug", 00:04:36.847 "params": { 00:04:36.847 "period_us": 100000, 00:04:36.847 "enable": false 00:04:36.847 } 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "method": "bdev_wait_for_examine" 00:04:36.847 } 00:04:36.847 ] 00:04:36.847 }, 00:04:36.847 { 
00:04:36.847 "subsystem": "scsi", 00:04:36.847 "config": null 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "subsystem": "scheduler", 00:04:36.847 "config": [ 00:04:36.847 { 00:04:36.847 "method": "framework_set_scheduler", 00:04:36.847 "params": { 00:04:36.847 "name": "static" 00:04:36.847 } 00:04:36.847 } 00:04:36.847 ] 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "subsystem": "vhost_scsi", 00:04:36.847 "config": [] 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "subsystem": "vhost_blk", 00:04:36.847 "config": [] 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "subsystem": "ublk", 00:04:36.847 "config": [] 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "subsystem": "nbd", 00:04:36.847 "config": [] 00:04:36.847 }, 00:04:36.847 { 00:04:36.847 "subsystem": "nvmf", 00:04:36.847 "config": [ 00:04:36.848 { 00:04:36.848 "method": "nvmf_set_config", 00:04:36.848 "params": { 00:04:36.848 "discovery_filter": "match_any", 00:04:36.848 "admin_cmd_passthru": { 00:04:36.848 "identify_ctrlr": false 00:04:36.848 }, 00:04:36.848 "dhchap_digests": [ 00:04:36.848 "sha256", 00:04:36.848 "sha384", 00:04:36.848 "sha512" 00:04:36.848 ], 00:04:36.848 "dhchap_dhgroups": [ 00:04:36.848 "null", 00:04:36.848 "ffdhe2048", 00:04:36.848 "ffdhe3072", 00:04:36.848 "ffdhe4096", 00:04:36.848 "ffdhe6144", 00:04:36.848 "ffdhe8192" 00:04:36.848 ] 00:04:36.848 } 00:04:36.848 }, 00:04:36.848 { 00:04:36.848 "method": "nvmf_set_max_subsystems", 00:04:36.848 "params": { 00:04:36.848 "max_subsystems": 1024 00:04:36.848 } 00:04:36.848 }, 00:04:36.848 { 00:04:36.848 "method": "nvmf_set_crdt", 00:04:36.848 "params": { 00:04:36.848 "crdt1": 0, 00:04:36.848 "crdt2": 0, 00:04:36.848 "crdt3": 0 00:04:36.848 } 00:04:36.848 }, 00:04:36.848 { 00:04:36.848 "method": "nvmf_create_transport", 00:04:36.848 "params": { 00:04:36.848 "trtype": "TCP", 00:04:36.848 "max_queue_depth": 128, 00:04:36.848 "max_io_qpairs_per_ctrlr": 127, 00:04:36.848 "in_capsule_data_size": 4096, 00:04:36.848 "max_io_size": 131072, 00:04:36.848 
"io_unit_size": 131072, 00:04:36.848 "max_aq_depth": 128, 00:04:36.848 "num_shared_buffers": 511, 00:04:36.848 "buf_cache_size": 4294967295, 00:04:36.848 "dif_insert_or_strip": false, 00:04:36.848 "zcopy": false, 00:04:36.848 "c2h_success": true, 00:04:36.848 "sock_priority": 0, 00:04:36.848 "abort_timeout_sec": 1, 00:04:36.848 "ack_timeout": 0, 00:04:36.848 "data_wr_pool_size": 0 00:04:36.848 } 00:04:36.848 } 00:04:36.848 ] 00:04:36.848 }, 00:04:36.848 { 00:04:36.848 "subsystem": "iscsi", 00:04:36.848 "config": [ 00:04:36.848 { 00:04:36.848 "method": "iscsi_set_options", 00:04:36.848 "params": { 00:04:36.848 "node_base": "iqn.2016-06.io.spdk", 00:04:36.848 "max_sessions": 128, 00:04:36.848 "max_connections_per_session": 2, 00:04:36.848 "max_queue_depth": 64, 00:04:36.848 "default_time2wait": 2, 00:04:36.848 "default_time2retain": 20, 00:04:36.848 "first_burst_length": 8192, 00:04:36.848 "immediate_data": true, 00:04:36.848 "allow_duplicated_isid": false, 00:04:36.848 "error_recovery_level": 0, 00:04:36.848 "nop_timeout": 60, 00:04:36.848 "nop_in_interval": 30, 00:04:36.848 "disable_chap": false, 00:04:36.848 "require_chap": false, 00:04:36.848 "mutual_chap": false, 00:04:36.848 "chap_group": 0, 00:04:36.848 "max_large_datain_per_connection": 64, 00:04:36.848 "max_r2t_per_connection": 4, 00:04:36.848 "pdu_pool_size": 36864, 00:04:36.848 "immediate_data_pool_size": 16384, 00:04:36.848 "data_out_pool_size": 2048 00:04:36.848 } 00:04:36.848 } 00:04:36.848 ] 00:04:36.848 } 00:04:36.848 ] 00:04:36.848 } 00:04:36.848 18:03:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:36.848 18:03:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1749153 00:04:36.848 18:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1749153 ']' 00:04:36.848 18:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1749153 00:04:36.848 18:03:38 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:04:36.848 18:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.848 18:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1749153 00:04:36.848 18:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.848 18:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.848 18:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1749153' 00:04:36.848 killing process with pid 1749153 00:04:36.848 18:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1749153 00:04:36.848 18:03:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1749153 00:04:37.110 18:03:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1749489 00:04:37.110 18:03:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:37.110 18:03:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1749489 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1749489 ']' 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1749489 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1749489 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1749489' 00:04:42.399 killing process with pid 1749489 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1749489 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1749489 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:42.399 00:04:42.399 real 0m6.568s 00:04:42.399 user 0m6.495s 00:04:42.399 sys 0m0.556s 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.399 ************************************ 00:04:42.399 END TEST skip_rpc_with_json 00:04:42.399 ************************************ 00:04:42.399 18:03:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:42.399 18:03:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.399 18:03:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.399 18:03:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.399 ************************************ 00:04:42.399 START TEST skip_rpc_with_delay 00:04:42.399 ************************************ 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.399 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.400 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.400 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.400 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.400 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.400 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:42.400 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.400 [2024-11-19 18:03:43.834218] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:42.400 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:42.400 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:42.400 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:42.400 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:42.400 00:04:42.400 real 0m0.081s 00:04:42.400 user 0m0.054s 00:04:42.400 sys 0m0.026s 00:04:42.400 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.400 18:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:42.400 ************************************ 00:04:42.400 END TEST skip_rpc_with_delay 00:04:42.400 ************************************ 00:04:42.663 18:03:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:42.663 18:03:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:42.663 18:03:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:42.663 18:03:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.663 18:03:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.663 18:03:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.663 ************************************ 00:04:42.663 START TEST exit_on_failed_rpc_init 00:04:42.663 ************************************ 00:04:42.663 18:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:42.663 18:03:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1750566 00:04:42.663 18:03:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1750566 00:04:42.663 18:03:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:04:42.663 18:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1750566 ']' 00:04:42.663 18:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.663 18:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.663 18:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.663 18:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.663 18:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.663 [2024-11-19 18:03:44.007555] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:04:42.663 [2024-11-19 18:03:44.007615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750566 ] 00:04:42.663 [2024-11-19 18:03:44.095980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.663 [2024-11-19 18:03:44.130695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.606 18:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.606 18:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:43.606 18:03:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.606 18:03:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.606 
18:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:43.606 18:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.606 18:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.606 18:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.606 18:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.606 18:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.606 18:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.606 18:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.606 18:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.606 18:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:43.606 18:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.606 [2024-11-19 18:03:44.849021] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:04:43.606 [2024-11-19 18:03:44.849073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750708 ] 00:04:43.606 [2024-11-19 18:03:44.936376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.606 [2024-11-19 18:03:44.973077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.606 [2024-11-19 18:03:44.973126] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:43.606 [2024-11-19 18:03:44.973136] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:43.606 [2024-11-19 18:03:44.973143] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:43.606 18:03:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:43.606 18:03:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:43.606 18:03:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:43.606 18:03:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:43.606 18:03:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:43.606 18:03:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:43.606 18:03:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:43.606 18:03:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1750566 00:04:43.606 18:03:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1750566 ']' 00:04:43.606 18:03:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1750566 00:04:43.606 18:03:45 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:43.606 18:03:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.606 18:03:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1750566 00:04:43.868 18:03:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.868 18:03:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.868 18:03:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1750566' 00:04:43.868 killing process with pid 1750566 00:04:43.868 18:03:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1750566 00:04:43.868 18:03:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1750566 00:04:43.868 00:04:43.868 real 0m1.319s 00:04:43.868 user 0m1.558s 00:04:43.868 sys 0m0.374s 00:04:43.868 18:03:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.868 18:03:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:43.868 ************************************ 00:04:43.868 END TEST exit_on_failed_rpc_init 00:04:43.868 ************************************ 00:04:43.868 18:03:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:43.868 00:04:43.868 real 0m13.761s 00:04:43.868 user 0m13.363s 00:04:43.868 sys 0m1.573s 00:04:43.868 18:03:45 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.868 18:03:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.868 ************************************ 00:04:43.868 END TEST skip_rpc 00:04:43.868 ************************************ 00:04:43.868 18:03:45 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:43.868 18:03:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.868 18:03:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.868 18:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:44.131 ************************************ 00:04:44.131 START TEST rpc_client 00:04:44.131 ************************************ 00:04:44.131 18:03:45 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:44.131 * Looking for test storage... 00:04:44.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:44.131 18:03:45 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.131 18:03:45 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.131 18:03:45 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.131 18:03:45 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.131 18:03:45 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:44.131 18:03:45 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.131 18:03:45 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.131 --rc genhtml_branch_coverage=1 00:04:44.131 --rc genhtml_function_coverage=1 00:04:44.131 --rc genhtml_legend=1 00:04:44.131 --rc geninfo_all_blocks=1 00:04:44.131 --rc geninfo_unexecuted_blocks=1 00:04:44.131 00:04:44.131 ' 00:04:44.131 18:03:45 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.131 --rc genhtml_branch_coverage=1 
00:04:44.131 --rc genhtml_function_coverage=1 00:04:44.131 --rc genhtml_legend=1 00:04:44.131 --rc geninfo_all_blocks=1 00:04:44.131 --rc geninfo_unexecuted_blocks=1 00:04:44.131 00:04:44.131 ' 00:04:44.131 18:03:45 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.131 --rc genhtml_branch_coverage=1 00:04:44.131 --rc genhtml_function_coverage=1 00:04:44.131 --rc genhtml_legend=1 00:04:44.131 --rc geninfo_all_blocks=1 00:04:44.131 --rc geninfo_unexecuted_blocks=1 00:04:44.131 00:04:44.131 ' 00:04:44.131 18:03:45 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.131 --rc genhtml_branch_coverage=1 00:04:44.131 --rc genhtml_function_coverage=1 00:04:44.131 --rc genhtml_legend=1 00:04:44.131 --rc geninfo_all_blocks=1 00:04:44.131 --rc geninfo_unexecuted_blocks=1 00:04:44.131 00:04:44.131 ' 00:04:44.131 18:03:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:44.131 OK 00:04:44.131 18:03:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:44.392 00:04:44.392 real 0m0.225s 00:04:44.392 user 0m0.144s 00:04:44.392 sys 0m0.093s 00:04:44.392 18:03:45 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.392 18:03:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:44.392 ************************************ 00:04:44.392 END TEST rpc_client 00:04:44.392 ************************************ 00:04:44.392 18:03:45 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:44.392 18:03:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.392 18:03:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.392 18:03:45 -- common/autotest_common.sh@10 
-- # set +x 00:04:44.392 ************************************ 00:04:44.392 START TEST json_config 00:04:44.392 ************************************ 00:04:44.392 18:03:45 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:44.392 18:03:45 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.392 18:03:45 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.392 18:03:45 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.392 18:03:45 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.392 18:03:45 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.392 18:03:45 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.392 18:03:45 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.392 18:03:45 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.392 18:03:45 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.392 18:03:45 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.393 18:03:45 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.393 18:03:45 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.393 18:03:45 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.393 18:03:45 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.393 18:03:45 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.393 18:03:45 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:44.393 18:03:45 json_config -- scripts/common.sh@345 -- # : 1 00:04:44.393 18:03:45 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.393 18:03:45 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.393 18:03:45 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:44.393 18:03:45 json_config -- scripts/common.sh@353 -- # local d=1 00:04:44.393 18:03:45 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.393 18:03:45 json_config -- scripts/common.sh@355 -- # echo 1 00:04:44.393 18:03:45 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.393 18:03:45 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:44.393 18:03:45 json_config -- scripts/common.sh@353 -- # local d=2 00:04:44.393 18:03:45 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.393 18:03:45 json_config -- scripts/common.sh@355 -- # echo 2 00:04:44.393 18:03:45 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.393 18:03:45 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.393 18:03:45 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.393 18:03:45 json_config -- scripts/common.sh@368 -- # return 0 00:04:44.393 18:03:45 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.393 18:03:45 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.393 --rc genhtml_branch_coverage=1 00:04:44.393 --rc genhtml_function_coverage=1 00:04:44.393 --rc genhtml_legend=1 00:04:44.393 --rc geninfo_all_blocks=1 00:04:44.393 --rc geninfo_unexecuted_blocks=1 00:04:44.393 00:04:44.393 ' 00:04:44.393 18:03:45 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.393 --rc genhtml_branch_coverage=1 00:04:44.393 --rc genhtml_function_coverage=1 00:04:44.393 --rc genhtml_legend=1 00:04:44.393 --rc geninfo_all_blocks=1 00:04:44.393 --rc geninfo_unexecuted_blocks=1 00:04:44.393 00:04:44.393 ' 00:04:44.393 18:03:45 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.393 --rc genhtml_branch_coverage=1 00:04:44.393 --rc genhtml_function_coverage=1 00:04:44.393 --rc genhtml_legend=1 00:04:44.393 --rc geninfo_all_blocks=1 00:04:44.393 --rc geninfo_unexecuted_blocks=1 00:04:44.393 00:04:44.393 ' 00:04:44.393 18:03:45 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.393 --rc genhtml_branch_coverage=1 00:04:44.393 --rc genhtml_function_coverage=1 00:04:44.393 --rc genhtml_legend=1 00:04:44.393 --rc geninfo_all_blocks=1 00:04:44.393 --rc geninfo_unexecuted_blocks=1 00:04:44.393 00:04:44.393 ' 00:04:44.393 18:03:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:44.393 18:03:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:44.393 18:03:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.393 18:03:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.393 18:03:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.393 18:03:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.393 18:03:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.393 18:03:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.393 18:03:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:44.655 18:03:45 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.655 18:03:45 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.655 18:03:45 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.655 18:03:45 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.655 18:03:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.655 18:03:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.655 18:03:45 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.655 18:03:45 json_config -- paths/export.sh@5 -- # export PATH 00:04:44.655 18:03:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@51 -- # : 0 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:44.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:44.655 18:03:45 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:44.655 INFO: JSON configuration test init 00:04:44.655 18:03:45 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:44.655 18:03:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.655 18:03:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:44.655 18:03:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.655 18:03:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.655 18:03:45 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:44.655 18:03:45 json_config -- json_config/common.sh@9 -- # local app=target 00:04:44.655 18:03:45 json_config -- json_config/common.sh@10 -- # shift 00:04:44.656 18:03:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:44.656 18:03:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:44.656 18:03:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:44.656 18:03:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.656 18:03:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.656 18:03:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1751029 00:04:44.656 18:03:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:44.656 Waiting for target to run... 
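The `Waiting for target to run...` message above is printed just before `waitforlisten`, which blocks until `spdk_tgt` is up and accepting connections on the UNIX-domain RPC socket `/var/tmp/spdk_tgt.sock`. One plausible shape for that wait loop, probing the socket with a short retry cadence — the helper name `wait_for_socket`, the probe method, and the retry parameters here are illustrative, not the `autotest_common.sh` source:

```shell
# Sketch: poll until a UNIX-domain socket exists and accepts a
# connection, retrying with a short sleep, then return 0 (or 1 on
# timeout). Illustrative only; the real helper is waitforlisten.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    local i
    for ((i = 0; i < retries; i++)); do
        if [ -S "$sock" ] && python3 -c '
import socket, sys
s = socket.socket(socket.AF_UNIX)
s.settimeout(1)
try:
    s.connect(sys.argv[1])
except OSError:
    sys.exit(1)
' "$sock"; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}
```

Such a loop would run between launching `spdk_tgt` and issuing the first `rpc.py` call against `/var/tmp/spdk_tgt.sock`.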
00:04:44.656 18:03:45 json_config -- json_config/common.sh@25 -- # waitforlisten 1751029 /var/tmp/spdk_tgt.sock 00:04:44.656 18:03:45 json_config -- common/autotest_common.sh@835 -- # '[' -z 1751029 ']' 00:04:44.656 18:03:45 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:44.656 18:03:45 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.656 18:03:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:44.656 18:03:45 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:44.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:44.656 18:03:45 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.656 18:03:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.656 [2024-11-19 18:03:45.976542] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
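The `lt 1.15 2` / `cmp_versions` trace earlier in this section (used to decide which lcov coverage options apply) splits each version string on `.`, `-` and `:` via `IFS=.-:` and `read -ra`, then compares the components numerically from left to right. A condensed sketch of that logic, assuming purely numeric components (`version_lt` is an illustrative name, not the `scripts/common.sh` function):

```shell
# Sketch of the cmp_versions "<" case traced above: split on .-: and
# compare component-wise as integers; missing components count as 0.
# Returns 0 when $1 < $2.
version_lt() {
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=${#v1[@]} i
    (( ${#v2[@]} > n )) && n=${#v2[@]}
    for ((i = 0; i < n; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}
```

For example, `version_lt 1.15 2` succeeds (lcov 1.15 is older than 2), while `version_lt 2 1.15` and `version_lt 1.15 1.15` fail.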
00:04:44.656 [2024-11-19 18:03:45.976614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751029 ] 00:04:44.917 [2024-11-19 18:03:46.314091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.917 [2024-11-19 18:03:46.342370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.489 18:03:46 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.489 18:03:46 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:45.489 18:03:46 json_config -- json_config/common.sh@26 -- # echo '' 00:04:45.489 00:04:45.489 18:03:46 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:45.489 18:03:46 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:45.489 18:03:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.489 18:03:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.489 18:03:46 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:45.489 18:03:46 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:45.489 18:03:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:45.489 18:03:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.489 18:03:46 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:45.489 18:03:46 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:45.490 18:03:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:46.061 18:03:47 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:46.061 18:03:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:46.061 18:03:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.061 18:03:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.061 18:03:47 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:46.061 18:03:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:46.061 18:03:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:46.061 18:03:47 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:46.061 18:03:47 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:46.061 18:03:47 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:46.061 18:03:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:46.061 18:03:47 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:46.061 18:03:47 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:46.061 18:03:47 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:46.061 18:03:47 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:46.061 18:03:47 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:46.061 18:03:47 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:46.061 18:03:47 json_config -- json_config/json_config.sh@54 -- # sort 00:04:46.061 18:03:47 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:46.062 18:03:47 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:46.062 18:03:47 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:46.062 18:03:47 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:46.062 18:03:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.062 18:03:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.322 18:03:47 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:46.322 18:03:47 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:46.322 18:03:47 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:46.322 18:03:47 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:46.322 18:03:47 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:46.322 18:03:47 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:46.322 18:03:47 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:46.322 18:03:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.322 18:03:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.322 18:03:47 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:46.322 18:03:47 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:46.322 18:03:47 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:46.322 18:03:47 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:46.322 18:03:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:46.322 MallocForNvmf0 00:04:46.322 18:03:47 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:46.322 18:03:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:46.583 MallocForNvmf1 00:04:46.583 18:03:47 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:46.583 18:03:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:46.844 [2024-11-19 18:03:48.085253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:46.844 18:03:48 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:46.844 18:03:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:46.844 18:03:48 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:46.844 18:03:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:47.105 18:03:48 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:47.105 18:03:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:47.365 18:03:48 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:47.365 18:03:48 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:47.365 [2024-11-19 18:03:48.807425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:47.625 18:03:48 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:47.625 18:03:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:47.625 18:03:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.625 18:03:48 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:47.625 18:03:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:47.625 18:03:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.625 18:03:48 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:47.625 18:03:48 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:47.625 18:03:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:47.625 MallocBdevForConfigChangeCheck 00:04:47.625 18:03:49 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:47.625 18:03:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:47.625 18:03:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.885 18:03:49 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:47.885 18:03:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.145 18:03:49 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:48.145 INFO: shutting down applications... 00:04:48.145 18:03:49 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:48.145 18:03:49 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:48.145 18:03:49 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:48.145 18:03:49 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:48.406 Calling clear_iscsi_subsystem 00:04:48.406 Calling clear_nvmf_subsystem 00:04:48.406 Calling clear_nbd_subsystem 00:04:48.406 Calling clear_ublk_subsystem 00:04:48.406 Calling clear_vhost_blk_subsystem 00:04:48.406 Calling clear_vhost_scsi_subsystem 00:04:48.406 Calling clear_bdev_subsystem 00:04:48.667 18:03:49 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:48.667 18:03:49 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:48.667 18:03:49 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:48.667 18:03:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.667 18:03:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:48.667 18:03:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:48.928 18:03:50 json_config -- json_config/json_config.sh@352 -- # break 00:04:48.928 18:03:50 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:48.928 18:03:50 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:48.928 18:03:50 json_config -- json_config/common.sh@31 -- # local app=target 00:04:48.928 18:03:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:48.928 18:03:50 json_config -- json_config/common.sh@35 -- # [[ -n 1751029 ]] 00:04:48.928 18:03:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1751029 00:04:48.928 18:03:50 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:48.928 18:03:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.928 18:03:50 json_config -- json_config/common.sh@41 -- # kill -0 1751029 00:04:48.928 18:03:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.500 18:03:50 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.500 18:03:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.500 18:03:50 json_config -- json_config/common.sh@41 -- # kill -0 1751029 00:04:49.500 18:03:50 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:49.500 18:03:50 json_config -- json_config/common.sh@43 -- # break 00:04:49.500 18:03:50 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:49.500 18:03:50 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:49.500 SPDK target shutdown done 00:04:49.500 18:03:50 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:49.500 INFO: relaunching applications... 
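The shutdown sequence traced above — send `SIGINT`, then poll with `kill -0` up to 30 times at 0.5 s intervals before declaring `SPDK target shutdown done` — can be sketched as follows. `shutdown_app` is an illustrative name, not the actual `json_config/common.sh` function:

```shell
# Sketch of the graceful-shutdown pattern seen above: SIGINT, then
# poll with `kill -0` (signal 0 = existence check) until the process
# exits or the retry budget runs out.
shutdown_app() {
    local pid=$1 max_tries=${2:-30} i
    kill -SIGINT "$pid" 2>/dev/null || return 0   # not running: done
    for ((i = 0; i < max_tries; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    return 1   # still alive; a caller might escalate to SIGKILL
}
```

With the defaults, this gives the target roughly 15 seconds to exit cleanly before the caller has to escalate.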
00:04:49.500 18:03:50 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:49.500 18:03:50 json_config -- json_config/common.sh@9 -- # local app=target 00:04:49.500 18:03:50 json_config -- json_config/common.sh@10 -- # shift 00:04:49.500 18:03:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:49.500 18:03:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:49.500 18:03:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:49.500 18:03:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.500 18:03:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.500 18:03:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1752167 00:04:49.500 18:03:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:49.500 Waiting for target to run... 00:04:49.500 18:03:50 json_config -- json_config/common.sh@25 -- # waitforlisten 1752167 /var/tmp/spdk_tgt.sock 00:04:49.500 18:03:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:49.500 18:03:50 json_config -- common/autotest_common.sh@835 -- # '[' -z 1752167 ']' 00:04:49.500 18:03:50 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:49.500 18:03:50 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.500 18:03:50 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:49.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:49.500 18:03:50 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.500 18:03:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.500 [2024-11-19 18:03:50.803528] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:04:49.500 [2024-11-19 18:03:50.803586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752167 ] 00:04:49.761 [2024-11-19 18:03:51.162975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.761 [2024-11-19 18:03:51.192849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.331 [2024-11-19 18:03:51.692166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:50.331 [2024-11-19 18:03:51.724542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:50.331 18:03:51 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.331 18:03:51 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:50.331 18:03:51 json_config -- json_config/common.sh@26 -- # echo '' 00:04:50.331 00:04:50.331 18:03:51 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:50.331 18:03:51 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:50.331 INFO: Checking if target configuration is the same... 
00:04:50.331 18:03:51 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.331 18:03:51 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:50.331 18:03:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.331 + '[' 2 -ne 2 ']' 00:04:50.331 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:50.331 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:50.331 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:50.331 +++ basename /dev/fd/62 00:04:50.331 ++ mktemp /tmp/62.XXX 00:04:50.331 + tmp_file_1=/tmp/62.yxl 00:04:50.332 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.332 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:50.332 + tmp_file_2=/tmp/spdk_tgt_config.json.MX2 00:04:50.332 + ret=0 00:04:50.332 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:50.902 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:50.902 + diff -u /tmp/62.yxl /tmp/spdk_tgt_config.json.MX2 00:04:50.902 + echo 'INFO: JSON config files are the same' 00:04:50.902 INFO: JSON config files are the same 00:04:50.902 + rm /tmp/62.yxl /tmp/spdk_tgt_config.json.MX2 00:04:50.902 + exit 0 00:04:50.902 18:03:52 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:50.903 18:03:52 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:50.903 INFO: changing configuration and checking if this can be detected... 
00:04:50.903 18:03:52 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:50.903 18:03:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:50.903 18:03:52 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:50.903 18:03:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.903 18:03:52 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.903 + '[' 2 -ne 2 ']' 00:04:50.903 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:50.903 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:50.903 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:50.903 +++ basename /dev/fd/62 00:04:50.903 ++ mktemp /tmp/62.XXX 00:04:50.903 + tmp_file_1=/tmp/62.zY2 00:04:50.903 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.903 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:50.903 + tmp_file_2=/tmp/spdk_tgt_config.json.H2E 00:04:50.903 + ret=0 00:04:50.903 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:51.474 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:51.474 + diff -u /tmp/62.zY2 /tmp/spdk_tgt_config.json.H2E 00:04:51.474 + ret=1 00:04:51.474 + echo '=== Start of file: /tmp/62.zY2 ===' 00:04:51.474 + cat /tmp/62.zY2 00:04:51.474 + echo '=== End of file: /tmp/62.zY2 ===' 00:04:51.474 + echo '' 00:04:51.474 + echo '=== Start of file: /tmp/spdk_tgt_config.json.H2E ===' 00:04:51.474 + cat /tmp/spdk_tgt_config.json.H2E 00:04:51.474 + echo '=== End of file: /tmp/spdk_tgt_config.json.H2E ===' 00:04:51.474 + echo '' 00:04:51.474 + rm /tmp/62.zY2 /tmp/spdk_tgt_config.json.H2E 00:04:51.474 + exit 1 00:04:51.474 18:03:52 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:51.474 INFO: configuration change detected. 
00:04:51.474 18:03:52 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:51.474 18:03:52 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:51.474 18:03:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:51.474 18:03:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.474 18:03:52 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:51.474 18:03:52 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:51.474 18:03:52 json_config -- json_config/json_config.sh@324 -- # [[ -n 1752167 ]] 00:04:51.474 18:03:52 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:51.474 18:03:52 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:51.474 18:03:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:51.474 18:03:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.474 18:03:52 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:51.474 18:03:52 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:51.474 18:03:52 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:51.474 18:03:52 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:51.474 18:03:52 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:51.474 18:03:52 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:51.474 18:03:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:51.474 18:03:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.474 18:03:52 json_config -- json_config/json_config.sh@330 -- # killprocess 1752167 00:04:51.474 18:03:52 json_config -- common/autotest_common.sh@954 -- # '[' -z 1752167 ']' 00:04:51.474 18:03:52 json_config -- common/autotest_common.sh@958 -- # kill -0 
1752167 00:04:51.474 18:03:52 json_config -- common/autotest_common.sh@959 -- # uname 00:04:51.474 18:03:52 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.474 18:03:52 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1752167 00:04:51.474 18:03:52 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.474 18:03:52 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.474 18:03:52 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1752167' 00:04:51.474 killing process with pid 1752167 00:04:51.474 18:03:52 json_config -- common/autotest_common.sh@973 -- # kill 1752167 00:04:51.474 18:03:52 json_config -- common/autotest_common.sh@978 -- # wait 1752167 00:04:51.735 18:03:53 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.735 18:03:53 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:51.735 18:03:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:51.735 18:03:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.735 18:03:53 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:51.735 18:03:53 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:51.735 INFO: Success 00:04:51.735 00:04:51.735 real 0m7.481s 00:04:51.735 user 0m8.999s 00:04:51.735 sys 0m2.035s 00:04:51.735 18:03:53 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.735 18:03:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.735 ************************************ 00:04:51.735 END TEST json_config 00:04:51.735 ************************************ 00:04:51.735 18:03:53 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:51.735 18:03:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.735 18:03:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.735 18:03:53 -- common/autotest_common.sh@10 -- # set +x 00:04:51.996 ************************************ 00:04:51.996 START TEST json_config_extra_key 00:04:51.996 ************************************ 00:04:51.996 18:03:53 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:51.996 18:03:53 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:51.996 18:03:53 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:51.996 18:03:53 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:51.996 18:03:53 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.996 18:03:53 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:51.996 18:03:53 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.996 18:03:53 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:51.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.996 --rc genhtml_branch_coverage=1 00:04:51.996 --rc genhtml_function_coverage=1 00:04:51.996 --rc genhtml_legend=1 00:04:51.996 --rc geninfo_all_blocks=1 
00:04:51.996 --rc geninfo_unexecuted_blocks=1 00:04:51.996 00:04:51.996 ' 00:04:51.996 18:03:53 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:51.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.996 --rc genhtml_branch_coverage=1 00:04:51.996 --rc genhtml_function_coverage=1 00:04:51.996 --rc genhtml_legend=1 00:04:51.996 --rc geninfo_all_blocks=1 00:04:51.996 --rc geninfo_unexecuted_blocks=1 00:04:51.996 00:04:51.996 ' 00:04:51.996 18:03:53 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:51.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.996 --rc genhtml_branch_coverage=1 00:04:51.996 --rc genhtml_function_coverage=1 00:04:51.996 --rc genhtml_legend=1 00:04:51.996 --rc geninfo_all_blocks=1 00:04:51.996 --rc geninfo_unexecuted_blocks=1 00:04:51.996 00:04:51.996 ' 00:04:51.996 18:03:53 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:51.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.996 --rc genhtml_branch_coverage=1 00:04:51.996 --rc genhtml_function_coverage=1 00:04:51.996 --rc genhtml_legend=1 00:04:51.996 --rc geninfo_all_blocks=1 00:04:51.996 --rc geninfo_unexecuted_blocks=1 00:04:51.996 00:04:51.996 ' 00:04:51.996 18:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:51.996 18:03:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:51.996 18:03:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.996 18:03:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.996 18:03:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.996 18:03:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.996 18:03:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:51.996 18:03:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.996 18:03:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.996 18:03:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.996 18:03:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.996 18:03:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.996 18:03:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:51.996 18:03:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:51.996 18:03:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.996 18:03:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.997 18:03:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.997 18:03:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.997 18:03:53 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:51.997 18:03:53 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:51.997 18:03:53 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.997 18:03:53 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.997 18:03:53 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.997 18:03:53 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.997 18:03:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.997 18:03:53 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.997 18:03:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:51.997 18:03:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.997 18:03:53 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:51.997 18:03:53 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:51.997 18:03:53 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:51.997 18:03:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.997 18:03:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.997 18:03:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.997 18:03:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.997 18:03:53 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.997 18:03:53 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.997 18:03:53 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.997 18:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:51.997 18:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:51.997 18:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:51.997 18:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:51.997 18:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:51.997 18:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:51.997 18:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:51.997 18:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:51.997 18:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:51.997 18:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.997 18:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:51.997 INFO: launching applications... 00:04:51.997 18:03:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:51.997 18:03:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:51.997 18:03:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:51.997 18:03:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.997 18:03:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.997 18:03:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.997 18:03:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.997 18:03:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.997 18:03:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1752860 00:04:51.997 18:03:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.997 Waiting for target to run... 
00:04:51.997 18:03:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1752860 /var/tmp/spdk_tgt.sock 00:04:51.997 18:03:53 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1752860 ']' 00:04:51.997 18:03:53 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.997 18:03:53 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:51.997 18:03:53 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.997 18:03:53 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.997 18:03:53 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.997 18:03:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:52.258 [2024-11-19 18:03:53.510042] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:04:52.258 [2024-11-19 18:03:53.510116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752860 ] 00:04:52.519 [2024-11-19 18:03:53.862684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.519 [2024-11-19 18:03:53.887579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.089 18:03:54 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.089 18:03:54 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:53.089 18:03:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:53.089 00:04:53.089 18:03:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:53.089 INFO: shutting down applications... 00:04:53.089 18:03:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:53.089 18:03:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:53.089 18:03:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:53.089 18:03:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1752860 ]] 00:04:53.089 18:03:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1752860 00:04:53.089 18:03:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:53.089 18:03:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.089 18:03:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1752860 00:04:53.089 18:03:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.661 18:03:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.661 18:03:54 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.661 18:03:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1752860 00:04:53.661 18:03:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:53.661 18:03:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:53.661 18:03:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:53.661 18:03:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:53.661 SPDK target shutdown done 00:04:53.661 18:03:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:53.661 Success 00:04:53.661 00:04:53.661 real 0m1.587s 00:04:53.661 user 0m1.154s 00:04:53.661 sys 0m0.472s 00:04:53.661 18:03:54 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.661 18:03:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:53.661 ************************************ 00:04:53.661 END TEST json_config_extra_key 00:04:53.661 ************************************ 00:04:53.661 18:03:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.661 18:03:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.661 18:03:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.661 18:03:54 -- common/autotest_common.sh@10 -- # set +x 00:04:53.661 ************************************ 00:04:53.661 START TEST alias_rpc 00:04:53.661 ************************************ 00:04:53.661 18:03:54 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.661 * Looking for test storage... 
00:04:53.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:53.661 18:03:55 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.661 18:03:55 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.661 18:03:55 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.661 18:03:55 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.661 18:03:55 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:53.661 18:03:55 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.661 18:03:55 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.661 --rc genhtml_branch_coverage=1 00:04:53.661 --rc genhtml_function_coverage=1 00:04:53.661 --rc genhtml_legend=1 00:04:53.661 --rc geninfo_all_blocks=1 00:04:53.661 --rc geninfo_unexecuted_blocks=1 00:04:53.661 00:04:53.661 ' 00:04:53.661 18:03:55 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.661 --rc genhtml_branch_coverage=1 00:04:53.661 --rc genhtml_function_coverage=1 00:04:53.661 --rc genhtml_legend=1 00:04:53.661 --rc geninfo_all_blocks=1 00:04:53.661 --rc geninfo_unexecuted_blocks=1 00:04:53.661 00:04:53.661 ' 00:04:53.661 18:03:55 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:04:53.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.661 --rc genhtml_branch_coverage=1 00:04:53.661 --rc genhtml_function_coverage=1 00:04:53.661 --rc genhtml_legend=1 00:04:53.661 --rc geninfo_all_blocks=1 00:04:53.662 --rc geninfo_unexecuted_blocks=1 00:04:53.662 00:04:53.662 ' 00:04:53.662 18:03:55 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.662 --rc genhtml_branch_coverage=1 00:04:53.662 --rc genhtml_function_coverage=1 00:04:53.662 --rc genhtml_legend=1 00:04:53.662 --rc geninfo_all_blocks=1 00:04:53.662 --rc geninfo_unexecuted_blocks=1 00:04:53.662 00:04:53.662 ' 00:04:53.662 18:03:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:53.662 18:03:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1753215 00:04:53.662 18:03:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1753215 00:04:53.662 18:03:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.662 18:03:55 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1753215 ']' 00:04:53.662 18:03:55 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.662 18:03:55 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.662 18:03:55 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.662 18:03:55 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.662 18:03:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.923 [2024-11-19 18:03:55.170064] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:04:53.923 [2024-11-19 18:03:55.170140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753215 ] 00:04:53.923 [2024-11-19 18:03:55.256731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.923 [2024-11-19 18:03:55.296633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.865 18:03:55 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.865 18:03:55 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:54.865 18:03:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:54.865 18:03:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1753215 00:04:54.865 18:03:56 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1753215 ']' 00:04:54.865 18:03:56 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1753215 00:04:54.865 18:03:56 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:54.865 18:03:56 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.865 18:03:56 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1753215 00:04:54.865 18:03:56 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.865 18:03:56 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.865 18:03:56 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1753215' 00:04:54.865 killing process with pid 1753215 00:04:54.865 18:03:56 alias_rpc -- common/autotest_common.sh@973 -- # kill 1753215 00:04:54.865 18:03:56 alias_rpc -- common/autotest_common.sh@978 -- # wait 1753215 00:04:55.126 00:04:55.126 real 0m1.520s 00:04:55.126 user 0m1.674s 00:04:55.126 sys 0m0.436s 00:04:55.126 18:03:56 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.126 18:03:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.126 ************************************ 00:04:55.126 END TEST alias_rpc 00:04:55.126 ************************************ 00:04:55.126 18:03:56 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:55.126 18:03:56 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:55.126 18:03:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.127 18:03:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.127 18:03:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.127 ************************************ 00:04:55.127 START TEST spdkcli_tcp 00:04:55.127 ************************************ 00:04:55.127 18:03:56 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:55.388 * Looking for test storage... 
00:04:55.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:55.388 18:03:56 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:55.388 18:03:56 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:55.388 18:03:56 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:55.388 18:03:56 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.388 18:03:56 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:55.388 18:03:56 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.388 18:03:56 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:55.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.388 --rc genhtml_branch_coverage=1 00:04:55.388 --rc genhtml_function_coverage=1 00:04:55.388 --rc genhtml_legend=1 00:04:55.388 --rc geninfo_all_blocks=1 00:04:55.388 --rc geninfo_unexecuted_blocks=1 00:04:55.388 00:04:55.388 ' 00:04:55.388 18:03:56 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:55.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.388 --rc genhtml_branch_coverage=1 00:04:55.388 --rc genhtml_function_coverage=1 00:04:55.388 --rc genhtml_legend=1 00:04:55.388 --rc geninfo_all_blocks=1 00:04:55.388 --rc geninfo_unexecuted_blocks=1 00:04:55.388 00:04:55.388 ' 00:04:55.388 18:03:56 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:55.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.388 --rc genhtml_branch_coverage=1 00:04:55.388 --rc genhtml_function_coverage=1 00:04:55.388 --rc genhtml_legend=1 00:04:55.388 --rc geninfo_all_blocks=1 00:04:55.388 --rc geninfo_unexecuted_blocks=1 00:04:55.388 00:04:55.388 ' 00:04:55.388 18:03:56 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:55.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.388 --rc genhtml_branch_coverage=1 00:04:55.388 --rc genhtml_function_coverage=1 00:04:55.388 --rc genhtml_legend=1 00:04:55.388 --rc geninfo_all_blocks=1 00:04:55.388 --rc geninfo_unexecuted_blocks=1 00:04:55.388 00:04:55.388 ' 00:04:55.388 18:03:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:55.388 18:03:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:55.388 18:03:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:55.388 18:03:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:55.388 18:03:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:55.389 18:03:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:55.389 18:03:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:55.389 18:03:56 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:55.389 18:03:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.389 18:03:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1753571 00:04:55.389 18:03:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1753571 00:04:55.389 18:03:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:55.389 18:03:56 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1753571 ']' 00:04:55.389 18:03:56 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.389 18:03:56 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.389 18:03:56 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.389 18:03:56 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.389 18:03:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.389 [2024-11-19 18:03:56.789039] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:04:55.389 [2024-11-19 18:03:56.789118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753571 ] 00:04:55.649 [2024-11-19 18:03:56.878546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.650 [2024-11-19 18:03:56.914631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.650 [2024-11-19 18:03:56.914632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.220 18:03:57 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.220 18:03:57 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:56.220 18:03:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1753766 00:04:56.220 18:03:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:56.220 18:03:57 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:56.482 [ 00:04:56.482 "bdev_malloc_delete", 00:04:56.482 "bdev_malloc_create", 00:04:56.482 "bdev_null_resize", 00:04:56.482 "bdev_null_delete", 00:04:56.482 "bdev_null_create", 00:04:56.482 "bdev_nvme_cuse_unregister", 00:04:56.482 "bdev_nvme_cuse_register", 00:04:56.482 "bdev_opal_new_user", 00:04:56.482 "bdev_opal_set_lock_state", 00:04:56.482 "bdev_opal_delete", 00:04:56.482 "bdev_opal_get_info", 00:04:56.482 "bdev_opal_create", 00:04:56.482 "bdev_nvme_opal_revert", 00:04:56.482 "bdev_nvme_opal_init", 00:04:56.482 "bdev_nvme_send_cmd", 00:04:56.482 "bdev_nvme_set_keys", 00:04:56.482 "bdev_nvme_get_path_iostat", 00:04:56.482 "bdev_nvme_get_mdns_discovery_info", 00:04:56.482 "bdev_nvme_stop_mdns_discovery", 00:04:56.482 "bdev_nvme_start_mdns_discovery", 00:04:56.482 "bdev_nvme_set_multipath_policy", 00:04:56.482 "bdev_nvme_set_preferred_path", 00:04:56.482 "bdev_nvme_get_io_paths", 00:04:56.482 "bdev_nvme_remove_error_injection", 00:04:56.482 "bdev_nvme_add_error_injection", 00:04:56.482 "bdev_nvme_get_discovery_info", 00:04:56.482 "bdev_nvme_stop_discovery", 00:04:56.482 "bdev_nvme_start_discovery", 00:04:56.482 "bdev_nvme_get_controller_health_info", 00:04:56.482 "bdev_nvme_disable_controller", 00:04:56.482 "bdev_nvme_enable_controller", 00:04:56.482 "bdev_nvme_reset_controller", 00:04:56.482 "bdev_nvme_get_transport_statistics", 00:04:56.482 "bdev_nvme_apply_firmware", 00:04:56.482 "bdev_nvme_detach_controller", 00:04:56.482 "bdev_nvme_get_controllers", 00:04:56.482 "bdev_nvme_attach_controller", 00:04:56.482 "bdev_nvme_set_hotplug", 00:04:56.482 "bdev_nvme_set_options", 00:04:56.482 "bdev_passthru_delete", 00:04:56.482 "bdev_passthru_create", 00:04:56.482 "bdev_lvol_set_parent_bdev", 00:04:56.482 "bdev_lvol_set_parent", 00:04:56.482 "bdev_lvol_check_shallow_copy", 00:04:56.482 "bdev_lvol_start_shallow_copy", 00:04:56.482 "bdev_lvol_grow_lvstore", 00:04:56.482 
"bdev_lvol_get_lvols", 00:04:56.482 "bdev_lvol_get_lvstores", 00:04:56.482 "bdev_lvol_delete", 00:04:56.482 "bdev_lvol_set_read_only", 00:04:56.482 "bdev_lvol_resize", 00:04:56.482 "bdev_lvol_decouple_parent", 00:04:56.482 "bdev_lvol_inflate", 00:04:56.482 "bdev_lvol_rename", 00:04:56.482 "bdev_lvol_clone_bdev", 00:04:56.482 "bdev_lvol_clone", 00:04:56.482 "bdev_lvol_snapshot", 00:04:56.482 "bdev_lvol_create", 00:04:56.482 "bdev_lvol_delete_lvstore", 00:04:56.482 "bdev_lvol_rename_lvstore", 00:04:56.482 "bdev_lvol_create_lvstore", 00:04:56.482 "bdev_raid_set_options", 00:04:56.482 "bdev_raid_remove_base_bdev", 00:04:56.482 "bdev_raid_add_base_bdev", 00:04:56.482 "bdev_raid_delete", 00:04:56.482 "bdev_raid_create", 00:04:56.482 "bdev_raid_get_bdevs", 00:04:56.482 "bdev_error_inject_error", 00:04:56.482 "bdev_error_delete", 00:04:56.482 "bdev_error_create", 00:04:56.482 "bdev_split_delete", 00:04:56.482 "bdev_split_create", 00:04:56.482 "bdev_delay_delete", 00:04:56.482 "bdev_delay_create", 00:04:56.482 "bdev_delay_update_latency", 00:04:56.482 "bdev_zone_block_delete", 00:04:56.482 "bdev_zone_block_create", 00:04:56.482 "blobfs_create", 00:04:56.482 "blobfs_detect", 00:04:56.482 "blobfs_set_cache_size", 00:04:56.482 "bdev_aio_delete", 00:04:56.482 "bdev_aio_rescan", 00:04:56.482 "bdev_aio_create", 00:04:56.482 "bdev_ftl_set_property", 00:04:56.482 "bdev_ftl_get_properties", 00:04:56.482 "bdev_ftl_get_stats", 00:04:56.482 "bdev_ftl_unmap", 00:04:56.482 "bdev_ftl_unload", 00:04:56.482 "bdev_ftl_delete", 00:04:56.482 "bdev_ftl_load", 00:04:56.482 "bdev_ftl_create", 00:04:56.482 "bdev_virtio_attach_controller", 00:04:56.483 "bdev_virtio_scsi_get_devices", 00:04:56.483 "bdev_virtio_detach_controller", 00:04:56.483 "bdev_virtio_blk_set_hotplug", 00:04:56.483 "bdev_iscsi_delete", 00:04:56.483 "bdev_iscsi_create", 00:04:56.483 "bdev_iscsi_set_options", 00:04:56.483 "accel_error_inject_error", 00:04:56.483 "ioat_scan_accel_module", 00:04:56.483 "dsa_scan_accel_module", 
00:04:56.483 "iaa_scan_accel_module", 00:04:56.483 "vfu_virtio_create_fs_endpoint", 00:04:56.483 "vfu_virtio_create_scsi_endpoint", 00:04:56.483 "vfu_virtio_scsi_remove_target", 00:04:56.483 "vfu_virtio_scsi_add_target", 00:04:56.483 "vfu_virtio_create_blk_endpoint", 00:04:56.483 "vfu_virtio_delete_endpoint", 00:04:56.483 "keyring_file_remove_key", 00:04:56.483 "keyring_file_add_key", 00:04:56.483 "keyring_linux_set_options", 00:04:56.483 "fsdev_aio_delete", 00:04:56.483 "fsdev_aio_create", 00:04:56.483 "iscsi_get_histogram", 00:04:56.483 "iscsi_enable_histogram", 00:04:56.483 "iscsi_set_options", 00:04:56.483 "iscsi_get_auth_groups", 00:04:56.483 "iscsi_auth_group_remove_secret", 00:04:56.483 "iscsi_auth_group_add_secret", 00:04:56.483 "iscsi_delete_auth_group", 00:04:56.483 "iscsi_create_auth_group", 00:04:56.483 "iscsi_set_discovery_auth", 00:04:56.483 "iscsi_get_options", 00:04:56.483 "iscsi_target_node_request_logout", 00:04:56.483 "iscsi_target_node_set_redirect", 00:04:56.483 "iscsi_target_node_set_auth", 00:04:56.483 "iscsi_target_node_add_lun", 00:04:56.483 "iscsi_get_stats", 00:04:56.483 "iscsi_get_connections", 00:04:56.483 "iscsi_portal_group_set_auth", 00:04:56.483 "iscsi_start_portal_group", 00:04:56.483 "iscsi_delete_portal_group", 00:04:56.483 "iscsi_create_portal_group", 00:04:56.483 "iscsi_get_portal_groups", 00:04:56.483 "iscsi_delete_target_node", 00:04:56.483 "iscsi_target_node_remove_pg_ig_maps", 00:04:56.483 "iscsi_target_node_add_pg_ig_maps", 00:04:56.483 "iscsi_create_target_node", 00:04:56.483 "iscsi_get_target_nodes", 00:04:56.483 "iscsi_delete_initiator_group", 00:04:56.483 "iscsi_initiator_group_remove_initiators", 00:04:56.483 "iscsi_initiator_group_add_initiators", 00:04:56.483 "iscsi_create_initiator_group", 00:04:56.483 "iscsi_get_initiator_groups", 00:04:56.483 "nvmf_set_crdt", 00:04:56.483 "nvmf_set_config", 00:04:56.483 "nvmf_set_max_subsystems", 00:04:56.483 "nvmf_stop_mdns_prr", 00:04:56.483 "nvmf_publish_mdns_prr", 
00:04:56.483 "nvmf_subsystem_get_listeners", 00:04:56.483 "nvmf_subsystem_get_qpairs", 00:04:56.483 "nvmf_subsystem_get_controllers", 00:04:56.483 "nvmf_get_stats", 00:04:56.483 "nvmf_get_transports", 00:04:56.483 "nvmf_create_transport", 00:04:56.483 "nvmf_get_targets", 00:04:56.483 "nvmf_delete_target", 00:04:56.483 "nvmf_create_target", 00:04:56.483 "nvmf_subsystem_allow_any_host", 00:04:56.483 "nvmf_subsystem_set_keys", 00:04:56.483 "nvmf_subsystem_remove_host", 00:04:56.483 "nvmf_subsystem_add_host", 00:04:56.483 "nvmf_ns_remove_host", 00:04:56.483 "nvmf_ns_add_host", 00:04:56.483 "nvmf_subsystem_remove_ns", 00:04:56.483 "nvmf_subsystem_set_ns_ana_group", 00:04:56.483 "nvmf_subsystem_add_ns", 00:04:56.483 "nvmf_subsystem_listener_set_ana_state", 00:04:56.483 "nvmf_discovery_get_referrals", 00:04:56.483 "nvmf_discovery_remove_referral", 00:04:56.483 "nvmf_discovery_add_referral", 00:04:56.483 "nvmf_subsystem_remove_listener", 00:04:56.483 "nvmf_subsystem_add_listener", 00:04:56.483 "nvmf_delete_subsystem", 00:04:56.483 "nvmf_create_subsystem", 00:04:56.483 "nvmf_get_subsystems", 00:04:56.483 "env_dpdk_get_mem_stats", 00:04:56.483 "nbd_get_disks", 00:04:56.483 "nbd_stop_disk", 00:04:56.483 "nbd_start_disk", 00:04:56.483 "ublk_recover_disk", 00:04:56.483 "ublk_get_disks", 00:04:56.483 "ublk_stop_disk", 00:04:56.483 "ublk_start_disk", 00:04:56.483 "ublk_destroy_target", 00:04:56.483 "ublk_create_target", 00:04:56.483 "virtio_blk_create_transport", 00:04:56.483 "virtio_blk_get_transports", 00:04:56.483 "vhost_controller_set_coalescing", 00:04:56.483 "vhost_get_controllers", 00:04:56.483 "vhost_delete_controller", 00:04:56.483 "vhost_create_blk_controller", 00:04:56.483 "vhost_scsi_controller_remove_target", 00:04:56.483 "vhost_scsi_controller_add_target", 00:04:56.483 "vhost_start_scsi_controller", 00:04:56.483 "vhost_create_scsi_controller", 00:04:56.483 "thread_set_cpumask", 00:04:56.483 "scheduler_set_options", 00:04:56.483 "framework_get_governor", 00:04:56.483 
"framework_get_scheduler", 00:04:56.483 "framework_set_scheduler", 00:04:56.483 "framework_get_reactors", 00:04:56.483 "thread_get_io_channels", 00:04:56.483 "thread_get_pollers", 00:04:56.483 "thread_get_stats", 00:04:56.483 "framework_monitor_context_switch", 00:04:56.483 "spdk_kill_instance", 00:04:56.483 "log_enable_timestamps", 00:04:56.483 "log_get_flags", 00:04:56.483 "log_clear_flag", 00:04:56.483 "log_set_flag", 00:04:56.483 "log_get_level", 00:04:56.483 "log_set_level", 00:04:56.483 "log_get_print_level", 00:04:56.483 "log_set_print_level", 00:04:56.483 "framework_enable_cpumask_locks", 00:04:56.483 "framework_disable_cpumask_locks", 00:04:56.483 "framework_wait_init", 00:04:56.483 "framework_start_init", 00:04:56.483 "scsi_get_devices", 00:04:56.483 "bdev_get_histogram", 00:04:56.483 "bdev_enable_histogram", 00:04:56.483 "bdev_set_qos_limit", 00:04:56.483 "bdev_set_qd_sampling_period", 00:04:56.483 "bdev_get_bdevs", 00:04:56.483 "bdev_reset_iostat", 00:04:56.483 "bdev_get_iostat", 00:04:56.483 "bdev_examine", 00:04:56.483 "bdev_wait_for_examine", 00:04:56.483 "bdev_set_options", 00:04:56.483 "accel_get_stats", 00:04:56.483 "accel_set_options", 00:04:56.483 "accel_set_driver", 00:04:56.483 "accel_crypto_key_destroy", 00:04:56.483 "accel_crypto_keys_get", 00:04:56.483 "accel_crypto_key_create", 00:04:56.483 "accel_assign_opc", 00:04:56.483 "accel_get_module_info", 00:04:56.483 "accel_get_opc_assignments", 00:04:56.483 "vmd_rescan", 00:04:56.483 "vmd_remove_device", 00:04:56.483 "vmd_enable", 00:04:56.483 "sock_get_default_impl", 00:04:56.483 "sock_set_default_impl", 00:04:56.483 "sock_impl_set_options", 00:04:56.483 "sock_impl_get_options", 00:04:56.483 "iobuf_get_stats", 00:04:56.483 "iobuf_set_options", 00:04:56.483 "keyring_get_keys", 00:04:56.483 "vfu_tgt_set_base_path", 00:04:56.483 "framework_get_pci_devices", 00:04:56.483 "framework_get_config", 00:04:56.483 "framework_get_subsystems", 00:04:56.483 "fsdev_set_opts", 00:04:56.483 "fsdev_get_opts", 
00:04:56.483 "trace_get_info", 00:04:56.483 "trace_get_tpoint_group_mask", 00:04:56.483 "trace_disable_tpoint_group", 00:04:56.483 "trace_enable_tpoint_group", 00:04:56.483 "trace_clear_tpoint_mask", 00:04:56.483 "trace_set_tpoint_mask", 00:04:56.483 "notify_get_notifications", 00:04:56.483 "notify_get_types", 00:04:56.483 "spdk_get_version", 00:04:56.483 "rpc_get_methods" 00:04:56.483 ] 00:04:56.483 18:03:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:56.483 18:03:57 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:56.483 18:03:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.483 18:03:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:56.483 18:03:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1753571 00:04:56.483 18:03:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1753571 ']' 00:04:56.483 18:03:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1753571 00:04:56.483 18:03:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:56.483 18:03:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.483 18:03:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1753571 00:04:56.483 18:03:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.483 18:03:57 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.483 18:03:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1753571' 00:04:56.483 killing process with pid 1753571 00:04:56.483 18:03:57 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1753571 00:04:56.483 18:03:57 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1753571 00:04:56.744 00:04:56.744 real 0m1.534s 00:04:56.744 user 0m2.786s 00:04:56.744 sys 0m0.457s 00:04:56.744 18:03:58 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.744 18:03:58 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.744 ************************************ 00:04:56.744 END TEST spdkcli_tcp 00:04:56.744 ************************************ 00:04:56.744 18:03:58 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.744 18:03:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.744 18:03:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.744 18:03:58 -- common/autotest_common.sh@10 -- # set +x 00:04:56.744 ************************************ 00:04:56.745 START TEST dpdk_mem_utility 00:04:56.745 ************************************ 00:04:56.745 18:03:58 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.745 * Looking for test storage... 00:04:57.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:57.006 18:03:58 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.006 18:03:58 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.006 18:03:58 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.006 18:03:58 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.006 18:03:58 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:57.006 18:03:58 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.006 18:03:58 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:04:57.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.006 --rc genhtml_branch_coverage=1 00:04:57.006 --rc genhtml_function_coverage=1 00:04:57.006 --rc genhtml_legend=1 00:04:57.006 --rc geninfo_all_blocks=1 00:04:57.006 --rc geninfo_unexecuted_blocks=1 00:04:57.006 00:04:57.006 ' 00:04:57.006 18:03:58 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.006 --rc genhtml_branch_coverage=1 00:04:57.006 --rc genhtml_function_coverage=1 00:04:57.006 --rc genhtml_legend=1 00:04:57.006 --rc geninfo_all_blocks=1 00:04:57.006 --rc geninfo_unexecuted_blocks=1 00:04:57.006 00:04:57.006 ' 00:04:57.006 18:03:58 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.006 --rc genhtml_branch_coverage=1 00:04:57.006 --rc genhtml_function_coverage=1 00:04:57.006 --rc genhtml_legend=1 00:04:57.006 --rc geninfo_all_blocks=1 00:04:57.006 --rc geninfo_unexecuted_blocks=1 00:04:57.006 00:04:57.006 ' 00:04:57.006 18:03:58 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.006 --rc genhtml_branch_coverage=1 00:04:57.007 --rc genhtml_function_coverage=1 00:04:57.007 --rc genhtml_legend=1 00:04:57.007 --rc geninfo_all_blocks=1 00:04:57.007 --rc geninfo_unexecuted_blocks=1 00:04:57.007 00:04:57.007 ' 00:04:57.007 18:03:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:57.007 18:03:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1753936 00:04:57.007 18:03:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1753936 00:04:57.007 18:03:58 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.007 18:03:58 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1753936 ']' 00:04:57.007 18:03:58 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.007 18:03:58 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.007 18:03:58 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.007 18:03:58 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.007 18:03:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.007 [2024-11-19 18:03:58.377880] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:04:57.007 [2024-11-19 18:03:58.377949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753936 ] 00:04:57.007 [2024-11-19 18:03:58.465205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.267 [2024-11-19 18:03:58.499457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.838 18:03:59 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.838 18:03:59 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:57.838 18:03:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:57.838 18:03:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:57.838 18:03:59 dpdk_mem_utility -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.838 18:03:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.838 { 00:04:57.838 "filename": "/tmp/spdk_mem_dump.txt" 00:04:57.838 } 00:04:57.838 18:03:59 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.838 18:03:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:57.838 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:57.838 1 heaps totaling size 810.000000 MiB 00:04:57.838 size: 810.000000 MiB heap id: 0 00:04:57.838 end heaps---------- 00:04:57.838 9 mempools totaling size 595.772034 MiB 00:04:57.838 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:57.838 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:57.838 size: 92.545471 MiB name: bdev_io_1753936 00:04:57.838 size: 50.003479 MiB name: msgpool_1753936 00:04:57.838 size: 36.509338 MiB name: fsdev_io_1753936 00:04:57.838 size: 21.763794 MiB name: PDU_Pool 00:04:57.838 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:57.838 size: 4.133484 MiB name: evtpool_1753936 00:04:57.838 size: 0.026123 MiB name: Session_Pool 00:04:57.838 end mempools------- 00:04:57.838 6 memzones totaling size 4.142822 MiB 00:04:57.838 size: 1.000366 MiB name: RG_ring_0_1753936 00:04:57.838 size: 1.000366 MiB name: RG_ring_1_1753936 00:04:57.838 size: 1.000366 MiB name: RG_ring_4_1753936 00:04:57.838 size: 1.000366 MiB name: RG_ring_5_1753936 00:04:57.838 size: 0.125366 MiB name: RG_ring_2_1753936 00:04:57.838 size: 0.015991 MiB name: RG_ring_3_1753936 00:04:57.838 end memzones------- 00:04:57.838 18:03:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:57.838 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:57.838 list of free elements. 
size: 10.862488 MiB 00:04:57.838 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:57.838 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:57.838 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:57.838 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:57.838 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:57.838 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:57.838 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:57.838 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:57.838 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:57.838 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:57.838 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:57.838 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:57.838 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:57.838 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:57.838 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:57.838 list of standard malloc elements. 
size: 199.218628 MiB 00:04:57.838 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:57.838 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:57.839 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:57.839 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:57.839 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:57.839 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:57.839 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:57.839 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:57.839 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:57.839 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:57.839 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:57.839 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:57.839 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:57.839 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:57.839 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:57.839 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:57.839 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:57.839 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:57.839 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:57.839 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:57.839 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:57.839 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:57.839 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:57.839 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:57.839 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:57.839 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:57.839 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:57.839 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:57.839 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:57.839 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:57.839 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:57.839 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:57.839 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:57.839 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:57.839 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:57.839 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:57.839 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:57.839 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:57.839 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:57.839 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:57.839 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:57.839 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:57.839 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:57.839 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:57.839 list of memzone associated elements. 
size: 599.918884 MiB 00:04:57.839 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:57.839 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:57.839 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:57.839 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:57.839 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:57.839 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1753936_0 00:04:57.839 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:57.839 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1753936_0 00:04:57.839 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:57.839 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1753936_0 00:04:57.839 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:57.839 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:57.839 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:57.839 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:57.839 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:57.839 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1753936_0 00:04:57.839 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:57.839 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1753936 00:04:57.839 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:57.839 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1753936 00:04:57.839 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:57.839 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:57.839 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:57.839 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:57.839 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:57.839 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:57.839 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:57.839 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:57.839 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:57.839 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1753936 00:04:57.839 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:57.839 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1753936 00:04:57.839 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:57.839 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1753936 00:04:57.839 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:57.839 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1753936 00:04:57.839 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:57.839 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1753936 00:04:57.839 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:57.839 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1753936 00:04:57.839 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:57.839 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:57.839 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:57.839 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:57.839 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:57.839 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:57.839 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:57.839 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1753936 00:04:57.839 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:57.839 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1753936 00:04:57.839 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:57.839 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:57.839 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:57.839 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:57.839 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:57.839 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1753936 00:04:57.839 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:57.839 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:57.839 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:57.839 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1753936 00:04:57.839 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:57.839 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1753936 00:04:57.839 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:57.839 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1753936 00:04:57.839 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:57.839 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:57.839 18:03:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:57.839 18:03:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1753936 00:04:57.839 18:03:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1753936 ']' 00:04:57.839 18:03:59 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1753936 00:04:57.839 18:03:59 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:57.839 18:03:59 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.839 18:03:59 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1753936 00:04:58.099 18:03:59 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.099 18:03:59 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.099 18:03:59 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1753936' 00:04:58.099 killing process with pid 1753936 00:04:58.099 18:03:59 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1753936 00:04:58.099 18:03:59 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1753936 00:04:58.099 00:04:58.099 real 0m1.423s 00:04:58.099 user 0m1.507s 00:04:58.099 sys 0m0.425s 00:04:58.099 18:03:59 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.099 18:03:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.099 ************************************ 00:04:58.099 END TEST dpdk_mem_utility 00:04:58.099 ************************************ 00:04:58.360 18:03:59 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:58.360 18:03:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.360 18:03:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.360 18:03:59 -- common/autotest_common.sh@10 -- # set +x 00:04:58.360 ************************************ 00:04:58.360 START TEST event 00:04:58.360 ************************************ 00:04:58.360 18:03:59 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:58.360 * Looking for test storage... 
00:04:58.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:58.360 18:03:59 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.360 18:03:59 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.360 18:03:59 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.360 18:03:59 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.360 18:03:59 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.360 18:03:59 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.360 18:03:59 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.360 18:03:59 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.360 18:03:59 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.360 18:03:59 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.360 18:03:59 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.360 18:03:59 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.360 18:03:59 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.360 18:03:59 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.360 18:03:59 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.360 18:03:59 event -- scripts/common.sh@344 -- # case "$op" in 00:04:58.360 18:03:59 event -- scripts/common.sh@345 -- # : 1 00:04:58.360 18:03:59 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.360 18:03:59 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.360 18:03:59 event -- scripts/common.sh@365 -- # decimal 1 00:04:58.360 18:03:59 event -- scripts/common.sh@353 -- # local d=1 00:04:58.360 18:03:59 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.360 18:03:59 event -- scripts/common.sh@355 -- # echo 1 00:04:58.360 18:03:59 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.360 18:03:59 event -- scripts/common.sh@366 -- # decimal 2 00:04:58.360 18:03:59 event -- scripts/common.sh@353 -- # local d=2 00:04:58.360 18:03:59 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.360 18:03:59 event -- scripts/common.sh@355 -- # echo 2 00:04:58.360 18:03:59 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.360 18:03:59 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.360 18:03:59 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.360 18:03:59 event -- scripts/common.sh@368 -- # return 0 00:04:58.360 18:03:59 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.360 18:03:59 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.360 --rc genhtml_branch_coverage=1 00:04:58.360 --rc genhtml_function_coverage=1 00:04:58.360 --rc genhtml_legend=1 00:04:58.360 --rc geninfo_all_blocks=1 00:04:58.360 --rc geninfo_unexecuted_blocks=1 00:04:58.360 00:04:58.360 ' 00:04:58.360 18:03:59 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.360 --rc genhtml_branch_coverage=1 00:04:58.360 --rc genhtml_function_coverage=1 00:04:58.360 --rc genhtml_legend=1 00:04:58.360 --rc geninfo_all_blocks=1 00:04:58.360 --rc geninfo_unexecuted_blocks=1 00:04:58.360 00:04:58.360 ' 00:04:58.360 18:03:59 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.360 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:58.360 --rc genhtml_branch_coverage=1 00:04:58.360 --rc genhtml_function_coverage=1 00:04:58.360 --rc genhtml_legend=1 00:04:58.360 --rc geninfo_all_blocks=1 00:04:58.360 --rc geninfo_unexecuted_blocks=1 00:04:58.360 00:04:58.360 ' 00:04:58.360 18:03:59 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.360 --rc genhtml_branch_coverage=1 00:04:58.360 --rc genhtml_function_coverage=1 00:04:58.360 --rc genhtml_legend=1 00:04:58.360 --rc geninfo_all_blocks=1 00:04:58.360 --rc geninfo_unexecuted_blocks=1 00:04:58.360 00:04:58.360 ' 00:04:58.360 18:03:59 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:58.360 18:03:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:58.360 18:03:59 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:58.360 18:03:59 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:58.360 18:03:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.360 18:03:59 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.620 ************************************ 00:04:58.620 START TEST event_perf 00:04:58.620 ************************************ 00:04:58.620 18:03:59 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:58.620 Running I/O for 1 seconds...[2024-11-19 18:03:59.874964] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:04:58.620 [2024-11-19 18:03:59.875069] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754264 ] 00:04:58.620 [2024-11-19 18:03:59.967017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:58.620 [2024-11-19 18:04:00.011073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.620 [2024-11-19 18:04:00.011197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.620 [2024-11-19 18:04:00.011281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.620 [2024-11-19 18:04:00.011282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.003 Running I/O for 1 seconds... 00:05:00.003 lcore 0: 179168 00:05:00.003 lcore 1: 179171 00:05:00.003 lcore 2: 179169 00:05:00.003 lcore 3: 179170 00:05:00.003 done. 
00:05:00.003 00:05:00.003 real 0m1.187s 00:05:00.003 user 0m4.094s 00:05:00.003 sys 0m0.089s 00:05:00.003 18:04:01 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.003 18:04:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:00.003 ************************************ 00:05:00.003 END TEST event_perf 00:05:00.003 ************************************ 00:05:00.003 18:04:01 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:00.003 18:04:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:00.003 18:04:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.003 18:04:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.003 ************************************ 00:05:00.003 START TEST event_reactor 00:05:00.003 ************************************ 00:05:00.003 18:04:01 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:00.003 [2024-11-19 18:04:01.140328] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:05:00.003 [2024-11-19 18:04:01.140430] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754601 ] 00:05:00.003 [2024-11-19 18:04:01.226632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.003 [2024-11-19 18:04:01.258289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.945 test_start 00:05:00.945 oneshot 00:05:00.945 tick 100 00:05:00.945 tick 100 00:05:00.945 tick 250 00:05:00.945 tick 100 00:05:00.945 tick 100 00:05:00.945 tick 100 00:05:00.945 tick 250 00:05:00.945 tick 500 00:05:00.945 tick 100 00:05:00.945 tick 100 00:05:00.945 tick 250 00:05:00.945 tick 100 00:05:00.945 tick 100 00:05:00.945 test_end 00:05:00.945 00:05:00.945 real 0m1.166s 00:05:00.945 user 0m1.080s 00:05:00.945 sys 0m0.082s 00:05:00.945 18:04:02 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.945 18:04:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:00.945 ************************************ 00:05:00.945 END TEST event_reactor 00:05:00.945 ************************************ 00:05:00.945 18:04:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:00.945 18:04:02 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:00.945 18:04:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.945 18:04:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.945 ************************************ 00:05:00.945 START TEST event_reactor_perf 00:05:00.945 ************************************ 00:05:00.945 18:04:02 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:05:00.945 [2024-11-19 18:04:02.386670] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:05:00.945 [2024-11-19 18:04:02.386766] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754952 ] 00:05:01.205 [2024-11-19 18:04:02.474110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.205 [2024-11-19 18:04:02.507404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.149 test_start 00:05:02.149 test_end 00:05:02.149 Performance: 541382 events per second 00:05:02.149 00:05:02.149 real 0m1.169s 00:05:02.149 user 0m1.086s 00:05:02.149 sys 0m0.080s 00:05:02.149 18:04:03 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.149 18:04:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:02.149 ************************************ 00:05:02.149 END TEST event_reactor_perf 00:05:02.149 ************************************ 00:05:02.149 18:04:03 event -- event/event.sh@49 -- # uname -s 00:05:02.149 18:04:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:02.149 18:04:03 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:02.149 18:04:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.149 18:04:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.149 18:04:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.412 ************************************ 00:05:02.412 START TEST event_scheduler 00:05:02.412 ************************************ 00:05:02.412 18:04:03 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:02.412 * Looking for test storage... 00:05:02.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:02.412 18:04:03 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:02.412 18:04:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:02.412 18:04:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:02.412 18:04:03 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.412 18:04:03 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:02.412 18:04:03 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.412 18:04:03 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:02.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.412 --rc genhtml_branch_coverage=1 00:05:02.412 --rc genhtml_function_coverage=1 00:05:02.412 --rc genhtml_legend=1 00:05:02.412 --rc geninfo_all_blocks=1 00:05:02.412 --rc geninfo_unexecuted_blocks=1 00:05:02.412 00:05:02.412 ' 00:05:02.412 18:04:03 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:02.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.412 --rc genhtml_branch_coverage=1 00:05:02.412 --rc genhtml_function_coverage=1 00:05:02.412 --rc 
genhtml_legend=1 00:05:02.412 --rc geninfo_all_blocks=1 00:05:02.412 --rc geninfo_unexecuted_blocks=1 00:05:02.412 00:05:02.412 ' 00:05:02.412 18:04:03 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:02.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.412 --rc genhtml_branch_coverage=1 00:05:02.412 --rc genhtml_function_coverage=1 00:05:02.412 --rc genhtml_legend=1 00:05:02.412 --rc geninfo_all_blocks=1 00:05:02.412 --rc geninfo_unexecuted_blocks=1 00:05:02.412 00:05:02.412 ' 00:05:02.412 18:04:03 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:02.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.412 --rc genhtml_branch_coverage=1 00:05:02.412 --rc genhtml_function_coverage=1 00:05:02.412 --rc genhtml_legend=1 00:05:02.412 --rc geninfo_all_blocks=1 00:05:02.412 --rc geninfo_unexecuted_blocks=1 00:05:02.412 00:05:02.412 ' 00:05:02.412 18:04:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:02.412 18:04:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1755321 00:05:02.412 18:04:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.412 18:04:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1755321 00:05:02.412 18:04:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:02.412 18:04:03 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1755321 ']' 00:05:02.412 18:04:03 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.412 18:04:03 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.412 18:04:03 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.412 18:04:03 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.412 18:04:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.412 [2024-11-19 18:04:03.869648] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:05:02.412 [2024-11-19 18:04:03.869726] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755321 ] 00:05:02.674 [2024-11-19 18:04:03.960601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.674 [2024-11-19 18:04:04.016657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.674 [2024-11-19 18:04:04.016822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.674 [2024-11-19 18:04:04.016983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.674 [2024-11-19 18:04:04.016983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.245 18:04:04 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.245 18:04:04 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:03.245 18:04:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:03.245 18:04:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.245 18:04:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.245 [2024-11-19 18:04:04.691338] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:03.245 [2024-11-19 18:04:04.691358] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:03.245 [2024-11-19 18:04:04.691368] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:03.245 [2024-11-19 18:04:04.691374] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:03.245 [2024-11-19 18:04:04.691380] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:03.245 18:04:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.245 18:04:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:03.245 18:04:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.245 18:04:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.507 [2024-11-19 18:04:04.758208] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:03.507 18:04:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.507 18:04:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:03.507 18:04:04 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.507 18:04:04 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.507 18:04:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.507 ************************************ 00:05:03.507 START TEST scheduler_create_thread 00:05:03.507 ************************************ 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.507 2 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.507 3 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.507 4 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.507 5 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.507 18:04:04 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.507 6 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.507 7 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.507 8 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.507 18:04:04 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.507 9 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.507 18:04:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.080 10 00:05:04.080 18:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.080 18:04:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:04.080 18:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.080 18:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.466 18:04:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.466 18:04:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:05.466 18:04:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:05.466 18:04:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.466 18:04:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.038 18:04:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.038 18:04:07 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:06.038 18:04:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.038 18:04:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.981 18:04:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.981 18:04:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:06.981 18:04:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:06.981 18:04:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.981 18:04:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.553 18:04:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.553 00:05:07.553 real 0m4.225s 00:05:07.553 user 0m0.025s 00:05:07.553 sys 0m0.007s 00:05:07.553 18:04:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.553 18:04:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.553 ************************************ 00:05:07.553 END TEST scheduler_create_thread 00:05:07.553 ************************************ 00:05:07.813 18:04:09 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:07.813 18:04:09 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1755321 00:05:07.813 18:04:09 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1755321 ']' 00:05:07.813 18:04:09 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1755321 00:05:07.813 18:04:09 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:07.813 18:04:09 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.813 18:04:09 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1755321 00:05:07.813 18:04:09 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:07.813 18:04:09 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:07.813 18:04:09 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1755321' 00:05:07.813 killing process with pid 1755321 00:05:07.813 18:04:09 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1755321 00:05:07.813 18:04:09 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1755321 00:05:08.073 [2024-11-19 18:04:09.299820] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:08.073 00:05:08.073 real 0m5.837s 00:05:08.073 user 0m12.910s 00:05:08.073 sys 0m0.416s 00:05:08.073 18:04:09 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.073 18:04:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.073 ************************************ 00:05:08.073 END TEST event_scheduler 00:05:08.073 ************************************ 00:05:08.073 18:04:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:08.073 18:04:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:08.073 18:04:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.073 18:04:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.073 18:04:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.073 ************************************ 00:05:08.073 START TEST app_repeat 00:05:08.073 ************************************ 00:05:08.362 18:04:09 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:08.362 18:04:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.362 18:04:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.362 18:04:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:08.362 18:04:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.362 18:04:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:08.362 18:04:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:08.362 18:04:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:08.362 18:04:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1756409 00:05:08.362 18:04:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.362 18:04:09 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:08.362 18:04:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1756409' 00:05:08.362 Process app_repeat pid: 1756409 00:05:08.362 18:04:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:08.362 18:04:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:08.362 spdk_app_start Round 0 00:05:08.362 18:04:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1756409 /var/tmp/spdk-nbd.sock 00:05:08.362 18:04:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1756409 ']' 00:05:08.362 18:04:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.362 18:04:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.362 18:04:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:08.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:08.362 18:04:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.362 18:04:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.362 [2024-11-19 18:04:09.578561] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:05:08.362 [2024-11-19 18:04:09.578624] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756409 ] 00:05:08.362 [2024-11-19 18:04:09.665431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.362 [2024-11-19 18:04:09.700008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.362 [2024-11-19 18:04:09.700010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.362 18:04:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.363 18:04:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:08.363 18:04:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.622 Malloc0 00:05:08.622 18:04:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.882 Malloc1 00:05:08.882 18:04:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.882 18:04:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.882 18:04:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.882 18:04:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:08.882 18:04:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.882 18:04:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:08.882 18:04:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.882 
18:04:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.882 18:04:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.882 18:04:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:08.882 18:04:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.882 18:04:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:08.882 18:04:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:08.882 18:04:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:08.882 18:04:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.882 18:04:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.140 /dev/nbd0 00:05:09.140 18:04:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.140 18:04:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.140 18:04:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:09.140 18:04:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:09.140 18:04:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:09.140 18:04:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:09.140 18:04:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:09.140 18:04:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:09.140 18:04:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:09.140 18:04:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:09.140 18:04:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:09.140 1+0 records in 00:05:09.140 1+0 records out 00:05:09.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287589 s, 14.2 MB/s 00:05:09.140 18:04:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.140 18:04:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:09.141 18:04:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.141 18:04:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:09.141 18:04:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:09.141 18:04:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.141 18:04:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.141 18:04:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:09.141 /dev/nbd1 00:05:09.141 18:04:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:09.141 18:04:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:09.141 18:04:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:09.141 18:04:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:09.141 18:04:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:09.141 18:04:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:09.141 18:04:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:09.400 18:04:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:09.400 18:04:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:09.400 18:04:10 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:09.400 18:04:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.400 1+0 records in 00:05:09.400 1+0 records out 00:05:09.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208653 s, 19.6 MB/s 00:05:09.400 18:04:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.400 18:04:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:09.400 18:04:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.400 18:04:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:09.400 18:04:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:09.400 { 00:05:09.400 "nbd_device": "/dev/nbd0", 00:05:09.400 "bdev_name": "Malloc0" 00:05:09.400 }, 00:05:09.400 { 00:05:09.400 "nbd_device": "/dev/nbd1", 00:05:09.400 "bdev_name": "Malloc1" 00:05:09.400 } 00:05:09.400 ]' 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 
00:05:09.400 { 00:05:09.400 "nbd_device": "/dev/nbd0", 00:05:09.400 "bdev_name": "Malloc0" 00:05:09.400 }, 00:05:09.400 { 00:05:09.400 "nbd_device": "/dev/nbd1", 00:05:09.400 "bdev_name": "Malloc1" 00:05:09.400 } 00:05:09.400 ]' 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:09.400 /dev/nbd1' 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:09.400 /dev/nbd1' 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:09.400 18:04:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:09.660 256+0 records in 00:05:09.660 256+0 records out 00:05:09.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127641 s, 82.2 MB/s 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:09.660 256+0 records in 00:05:09.660 256+0 records out 00:05:09.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118174 s, 88.7 MB/s 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:09.660 256+0 records in 00:05:09.660 256+0 records out 00:05:09.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133463 s, 78.6 MB/s 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:09.660 18:04:10 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.660 18:04:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:09.660 18:04:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:09.660 18:04:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:09.660 18:04:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:09.660 18:04:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.660 18:04:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.660 18:04:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:09.660 18:04:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.660 18:04:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.660 18:04:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.660 18:04:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:09.920 18:04:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.920 18:04:11 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.920 18:04:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.920 18:04:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.920 18:04:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.920 18:04:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.920 18:04:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.920 18:04:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.920 18:04:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.920 18:04:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.920 18:04:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.179 18:04:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.179 18:04:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.179 18:04:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.179 18:04:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.179 18:04:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.179 18:04:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.179 18:04:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:10.179 18:04:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.179 18:04:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.180 18:04:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.180 18:04:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.180 18:04:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.180 18:04:11 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.440 18:04:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:10.440 [2024-11-19 18:04:11.821947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.440 [2024-11-19 18:04:11.852068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.440 [2024-11-19 18:04:11.852069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.440 [2024-11-19 18:04:11.881247] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:10.440 [2024-11-19 18:04:11.881281] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:13.741 18:04:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:13.741 18:04:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:13.741 spdk_app_start Round 1 00:05:13.741 18:04:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1756409 /var/tmp/spdk-nbd.sock 00:05:13.741 18:04:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1756409 ']' 00:05:13.741 18:04:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.741 18:04:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.741 18:04:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:13.741 18:04:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.741 18:04:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.741 18:04:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.741 18:04:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:13.741 18:04:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.741 Malloc0 00:05:13.741 18:04:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.002 Malloc1 00:05:14.002 18:04:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.002 18:04:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.002 18:04:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.003 18:04:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.003 18:04:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.003 18:04:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.003 18:04:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.003 18:04:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.003 18:04:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.003 18:04:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.003 18:04:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.003 18:04:15 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:14.003 18:04:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:14.003 18:04:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.003 18:04:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.003 18:04:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.265 /dev/nbd0 00:05:14.265 18:04:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:14.265 18:04:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:14.265 18:04:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:14.265 18:04:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:14.265 18:04:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:14.265 18:04:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:14.265 18:04:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:14.265 18:04:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:14.265 18:04:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:14.265 18:04:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:14.265 18:04:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.265 1+0 records in 00:05:14.265 1+0 records out 00:05:14.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292783 s, 14.0 MB/s 00:05:14.265 18:04:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.265 18:04:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:14.265 18:04:15 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.265 18:04:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:14.265 18:04:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:14.265 18:04:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.265 18:04:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.265 18:04:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.265 /dev/nbd1 00:05:14.526 18:04:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.526 18:04:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.526 18:04:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:14.526 18:04:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:14.526 18:04:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:14.526 18:04:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:14.526 18:04:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:14.526 18:04:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:14.526 18:04:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:14.526 18:04:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:14.526 18:04:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.526 1+0 records in 00:05:14.526 1+0 records out 00:05:14.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277562 s, 14.8 MB/s 00:05:14.526 18:04:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.526 18:04:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:14.526 18:04:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.526 18:04:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:14.526 18:04:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:14.526 18:04:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.526 18:04:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.526 18:04:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.526 18:04:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.526 18:04:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.526 18:04:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:14.526 { 00:05:14.526 "nbd_device": "/dev/nbd0", 00:05:14.526 "bdev_name": "Malloc0" 00:05:14.526 }, 00:05:14.526 { 00:05:14.526 "nbd_device": "/dev/nbd1", 00:05:14.526 "bdev_name": "Malloc1" 00:05:14.526 } 00:05:14.526 ]' 00:05:14.526 18:04:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:14.526 { 00:05:14.526 "nbd_device": "/dev/nbd0", 00:05:14.526 "bdev_name": "Malloc0" 00:05:14.526 }, 00:05:14.526 { 00:05:14.526 "nbd_device": "/dev/nbd1", 00:05:14.526 "bdev_name": "Malloc1" 00:05:14.526 } 00:05:14.527 ]' 00:05:14.527 18:04:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:14.787 /dev/nbd1' 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:14.787 /dev/nbd1' 00:05:14.787 
18:04:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:14.787 256+0 records in 00:05:14.787 256+0 records out 00:05:14.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120521 s, 87.0 MB/s 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.787 256+0 records in 00:05:14.787 256+0 records out 00:05:14.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120044 s, 87.3 MB/s 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.787 256+0 records in 00:05:14.787 256+0 records out 00:05:14.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134749 s, 77.8 MB/s 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:14.787 18:04:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.788 18:04:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:14.788 18:04:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.788 18:04:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:14.788 18:04:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:14.788 18:04:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:14.788 18:04:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.788 18:04:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.049 18:04:16 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.049 18:04:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.310 18:04:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.310 18:04:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.310 18:04:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.310 18:04:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.310 18:04:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.310 18:04:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.310 18:04:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:15.310 18:04:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.310 18:04:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.310 18:04:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.310 18:04:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.310 18:04:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.310 18:04:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.571 18:04:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:15.571 [2024-11-19 18:04:16.963743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.571 [2024-11-19 18:04:16.993793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.571 [2024-11-19 18:04:16.993793] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.571 [2024-11-19 18:04:17.023259] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.571 [2024-11-19 18:04:17.023292] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:18.873 18:04:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:18.873 18:04:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:18.873 spdk_app_start Round 2 00:05:18.873 18:04:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1756409 /var/tmp/spdk-nbd.sock 00:05:18.873 18:04:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1756409 ']' 00:05:18.873 18:04:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.873 18:04:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.873 18:04:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:18.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:18.873 18:04:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.873 18:04:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.873 18:04:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.873 18:04:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:18.873 18:04:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.873 Malloc0 00:05:18.873 18:04:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.135 Malloc1 00:05:19.135 18:04:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.135 18:04:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.135 18:04:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.135 18:04:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.135 18:04:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.135 18:04:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.135 18:04:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.135 18:04:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.135 18:04:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.135 18:04:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.135 18:04:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.135 18:04:20 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:19.135 18:04:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:19.135 18:04:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.135 18:04:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.135 18:04:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.396 /dev/nbd0 00:05:19.396 18:04:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.396 18:04:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.396 18:04:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:19.396 18:04:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:19.396 18:04:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:19.396 18:04:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:19.396 18:04:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:19.396 18:04:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:19.396 18:04:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:19.396 18:04:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:19.396 18:04:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.396 1+0 records in 00:05:19.396 1+0 records out 00:05:19.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274167 s, 14.9 MB/s 00:05:19.396 18:04:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.397 18:04:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:19.397 18:04:20 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.397 18:04:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:19.397 18:04:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:19.397 18:04:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.397 18:04:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.397 18:04:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:19.658 /dev/nbd1 00:05:19.658 18:04:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:19.658 18:04:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:19.658 18:04:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:19.658 18:04:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:19.658 18:04:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:19.658 18:04:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:19.658 18:04:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:19.658 18:04:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:19.658 18:04:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:19.658 18:04:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:19.658 18:04:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.658 1+0 records in 00:05:19.658 1+0 records out 00:05:19.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324967 s, 12.6 MB/s 00:05:19.658 18:04:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.658 18:04:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:19.658 18:04:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.658 18:04:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:19.658 18:04:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:19.658 18:04:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.658 18:04:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.658 18:04:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.658 18:04:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.658 18:04:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.658 18:04:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:19.658 { 00:05:19.658 "nbd_device": "/dev/nbd0", 00:05:19.658 "bdev_name": "Malloc0" 00:05:19.658 }, 00:05:19.658 { 00:05:19.658 "nbd_device": "/dev/nbd1", 00:05:19.658 "bdev_name": "Malloc1" 00:05:19.658 } 00:05:19.658 ]' 00:05:19.658 18:04:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:19.658 { 00:05:19.658 "nbd_device": "/dev/nbd0", 00:05:19.658 "bdev_name": "Malloc0" 00:05:19.658 }, 00:05:19.658 { 00:05:19.658 "nbd_device": "/dev/nbd1", 00:05:19.658 "bdev_name": "Malloc1" 00:05:19.658 } 00:05:19.658 ]' 00:05:19.658 18:04:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:19.919 /dev/nbd1' 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:19.919 /dev/nbd1' 00:05:19.919 
18:04:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:19.919 256+0 records in 00:05:19.919 256+0 records out 00:05:19.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118997 s, 88.1 MB/s 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:19.919 256+0 records in 00:05:19.919 256+0 records out 00:05:19.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121307 s, 86.4 MB/s 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:19.919 256+0 records in 00:05:19.919 256+0 records out 00:05:19.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130736 s, 80.2 MB/s 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:19.919 18:04:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:19.920 18:04:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.920 18:04:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:19.920 18:04:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.920 18:04:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:19.920 18:04:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:19.920 18:04:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:19.920 18:04:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.920 18:04:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:19.920 18:04:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:19.920 18:04:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:19.920 18:04:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.920 18:04:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:20.181 18:04:21 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.181 18:04:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.442 18:04:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:20.442 18:04:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:20.442 18:04:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.442 18:04:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:20.442 18:04:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:20.442 18:04:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.442 18:04:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:20.442 18:04:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:20.442 18:04:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:20.442 18:04:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:20.442 18:04:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:20.442 18:04:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:20.442 18:04:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:20.703 18:04:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:20.703 [2024-11-19 18:04:22.115767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.703 [2024-11-19 18:04:22.145553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.703 [2024-11-19 18:04:22.145554] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.963 [2024-11-19 18:04:22.174658] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.963 [2024-11-19 18:04:22.174689] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:24.267 18:04:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1756409 /var/tmp/spdk-nbd.sock 00:05:24.267 18:04:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1756409 ']' 00:05:24.267 18:04:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:24.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:24.268 18:04:25 event.app_repeat -- event/event.sh@39 -- # killprocess 1756409 00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1756409 ']' 00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1756409 00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1756409 00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1756409' 00:05:24.268 killing process with pid 1756409 00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1756409 00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1756409 00:05:24.268 spdk_app_start is called in Round 0. 00:05:24.268 Shutdown signal received, stop current app iteration 00:05:24.268 Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 reinitialization... 00:05:24.268 spdk_app_start is called in Round 1. 00:05:24.268 Shutdown signal received, stop current app iteration 00:05:24.268 Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 reinitialization... 00:05:24.268 spdk_app_start is called in Round 2. 
00:05:24.268 Shutdown signal received, stop current app iteration 00:05:24.268 Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 reinitialization... 00:05:24.268 spdk_app_start is called in Round 3. 00:05:24.268 Shutdown signal received, stop current app iteration 00:05:24.268 18:04:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:24.268 18:04:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:24.268 00:05:24.268 real 0m15.837s 00:05:24.268 user 0m34.787s 00:05:24.268 sys 0m2.294s 00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.268 18:04:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.268 ************************************ 00:05:24.268 END TEST app_repeat 00:05:24.268 ************************************ 00:05:24.268 18:04:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:24.268 18:04:25 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:24.268 18:04:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.268 18:04:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.268 18:04:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.268 ************************************ 00:05:24.268 START TEST cpu_locks 00:05:24.268 ************************************ 00:05:24.268 18:04:25 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:24.268 * Looking for test storage... 
00:05:24.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:24.268 18:04:25 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.268 18:04:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.268 18:04:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.268 18:04:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.268 18:04:25 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:24.268 18:04:25 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.268 18:04:25 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.268 --rc genhtml_branch_coverage=1 00:05:24.268 --rc genhtml_function_coverage=1 00:05:24.268 --rc genhtml_legend=1 00:05:24.268 --rc geninfo_all_blocks=1 00:05:24.268 --rc geninfo_unexecuted_blocks=1 00:05:24.268 00:05:24.268 ' 00:05:24.268 18:04:25 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.268 --rc genhtml_branch_coverage=1 00:05:24.268 --rc genhtml_function_coverage=1 00:05:24.268 --rc genhtml_legend=1 00:05:24.268 --rc geninfo_all_blocks=1 00:05:24.268 --rc geninfo_unexecuted_blocks=1 
00:05:24.268 00:05:24.268 ' 00:05:24.268 18:04:25 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.268 --rc genhtml_branch_coverage=1 00:05:24.268 --rc genhtml_function_coverage=1 00:05:24.268 --rc genhtml_legend=1 00:05:24.268 --rc geninfo_all_blocks=1 00:05:24.268 --rc geninfo_unexecuted_blocks=1 00:05:24.268 00:05:24.268 ' 00:05:24.268 18:04:25 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.268 --rc genhtml_branch_coverage=1 00:05:24.268 --rc genhtml_function_coverage=1 00:05:24.268 --rc genhtml_legend=1 00:05:24.268 --rc geninfo_all_blocks=1 00:05:24.268 --rc geninfo_unexecuted_blocks=1 00:05:24.268 00:05:24.268 ' 00:05:24.268 18:04:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:24.268 18:04:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:24.268 18:04:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:24.268 18:04:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:24.268 18:04:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.268 18:04:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.268 18:04:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.268 ************************************ 00:05:24.268 START TEST default_locks 00:05:24.268 ************************************ 00:05:24.268 18:04:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:24.268 18:04:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1759875 00:05:24.268 18:04:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1759875 00:05:24.268 18:04:25 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.268 18:04:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1759875 ']' 00:05:24.268 18:04:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.268 18:04:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.268 18:04:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.268 18:04:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.268 18:04:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.530 [2024-11-19 18:04:25.768273] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:05:24.530 [2024-11-19 18:04:25.768339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1759875 ] 00:05:24.530 [2024-11-19 18:04:25.853743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.530 [2024-11-19 18:04:25.893241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.473 18:04:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.473 18:04:26 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:25.473 18:04:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1759875 00:05:25.473 18:04:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1759875 00:05:25.473 18:04:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.734 lslocks: write error 00:05:25.734 18:04:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1759875 00:05:25.734 18:04:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1759875 ']' 00:05:25.734 18:04:26 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1759875 00:05:25.734 18:04:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:25.734 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.734 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1759875 00:05:25.734 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.734 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.734 18:04:27 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1759875' 00:05:25.734 killing process with pid 1759875 00:05:25.734 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1759875 00:05:25.734 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1759875 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1759875 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1759875 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1759875 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1759875 ']' 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1759875) - No such process 00:05:25.997 ERROR: process (pid: 1759875) is no longer running 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:25.997 18:04:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:25.998 18:04:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:25.998 18:04:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:25.998 00:05:25.998 real 0m1.548s 00:05:25.998 user 0m1.678s 00:05:25.998 sys 0m0.545s 00:05:25.998 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.998 18:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.998 ************************************ 00:05:25.998 END TEST default_locks 00:05:25.998 ************************************ 00:05:25.998 18:04:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:25.998 18:04:27 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.998 18:04:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.998 18:04:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.998 ************************************ 00:05:25.998 START TEST default_locks_via_rpc 00:05:25.998 ************************************ 00:05:25.998 18:04:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:25.998 18:04:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1760213 00:05:25.998 18:04:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1760213 00:05:25.998 18:04:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.998 18:04:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1760213 ']' 00:05:25.998 18:04:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.998 18:04:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.998 18:04:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.998 18:04:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.998 18:04:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.998 [2024-11-19 18:04:27.390736] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:05:25.998 [2024-11-19 18:04:27.390795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1760213 ] 00:05:26.259 [2024-11-19 18:04:27.476741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.259 [2024-11-19 18:04:27.510844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.830 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.830 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:26.830 18:04:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:26.830 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.830 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.830 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.830 18:04:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:26.830 18:04:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:26.830 18:04:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:26.830 18:04:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:26.830 18:04:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:26.830 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.830 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.830 18:04:28 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.830 18:04:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1760213 00:05:26.830 18:04:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1760213 00:05:26.830 18:04:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.401 18:04:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1760213 00:05:27.401 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1760213 ']' 00:05:27.401 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1760213 00:05:27.401 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:27.401 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.401 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1760213 00:05:27.401 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.401 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.401 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1760213' 00:05:27.401 killing process with pid 1760213 00:05:27.401 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1760213 00:05:27.401 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1760213 00:05:27.662 00:05:27.662 real 0m1.607s 00:05:27.662 user 0m1.710s 00:05:27.662 sys 0m0.568s 00:05:27.662 18:04:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.662 18:04:28 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.662 ************************************ 00:05:27.662 END TEST default_locks_via_rpc 00:05:27.662 ************************************ 00:05:27.662 18:04:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:27.662 18:04:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.662 18:04:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.662 18:04:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.662 ************************************ 00:05:27.662 START TEST non_locking_app_on_locked_coremask 00:05:27.662 ************************************ 00:05:27.662 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:27.662 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1760562 00:05:27.662 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1760562 /var/tmp/spdk.sock 00:05:27.662 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.662 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1760562 ']' 00:05:27.662 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.662 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.662 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:27.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.662 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.662 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.662 [2024-11-19 18:04:29.068987] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:05:27.662 [2024-11-19 18:04:29.069045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1760562 ] 00:05:27.926 [2024-11-19 18:04:29.156394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.926 [2024-11-19 18:04:29.196286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.497 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.497 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:28.497 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:28.497 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1760740 00:05:28.497 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1760740 /var/tmp/spdk2.sock 00:05:28.497 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1760740 ']' 00:05:28.497 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:28.497 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.497 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.497 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.497 18:04:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.497 [2024-11-19 18:04:29.924528] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:05:28.497 [2024-11-19 18:04:29.924579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1760740 ] 00:05:28.756 [2024-11-19 18:04:30.010649] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:28.756 [2024-11-19 18:04:30.010677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.756 [2024-11-19 18:04:30.073926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.327 18:04:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.327 18:04:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:29.327 18:04:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1760562 00:05:29.327 18:04:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1760562 00:05:29.327 18:04:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.898 lslocks: write error 00:05:29.898 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1760562 00:05:29.898 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1760562 ']' 00:05:29.898 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1760562 00:05:29.898 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:29.898 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.898 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1760562 00:05:29.898 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.898 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.898 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1760562' 00:05:29.898 killing process with pid 1760562 00:05:29.898 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1760562 00:05:29.898 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1760562 00:05:30.469 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1760740 00:05:30.469 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1760740 ']' 00:05:30.469 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1760740 00:05:30.469 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:30.469 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.469 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1760740 00:05:30.469 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.469 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.469 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1760740' 00:05:30.469 killing process with pid 1760740 00:05:30.469 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1760740 00:05:30.469 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1760740 00:05:30.469 00:05:30.469 real 0m2.898s 00:05:30.469 user 0m3.250s 00:05:30.469 sys 0m0.859s 00:05:30.469 18:04:31 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.469 18:04:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.469 ************************************ 00:05:30.469 END TEST non_locking_app_on_locked_coremask 00:05:30.469 ************************************ 00:05:30.729 18:04:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:30.729 18:04:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.729 18:04:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.729 18:04:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.729 ************************************ 00:05:30.729 START TEST locking_app_on_unlocked_coremask 00:05:30.729 ************************************ 00:05:30.729 18:04:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:30.729 18:04:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1761119 00:05:30.729 18:04:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1761119 /var/tmp/spdk.sock 00:05:30.729 18:04:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:30.729 18:04:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1761119 ']' 00:05:30.729 18:04:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.729 18:04:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.729 18:04:31 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.729 18:04:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.729 18:04:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.729 [2024-11-19 18:04:32.044402] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:05:30.729 [2024-11-19 18:04:32.044457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761119 ] 00:05:30.729 [2024-11-19 18:04:32.129698] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:30.729 [2024-11-19 18:04:32.129723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.729 [2024-11-19 18:04:32.160611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.670 18:04:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.670 18:04:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:31.670 18:04:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:31.670 18:04:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1761448 00:05:31.670 18:04:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1761448 /var/tmp/spdk2.sock 00:05:31.670 18:04:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1761448 ']' 00:05:31.670 18:04:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.670 18:04:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.670 18:04:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.670 18:04:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.670 18:04:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.670 [2024-11-19 18:04:32.865279] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:05:31.670 [2024-11-19 18:04:32.865334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761448 ] 00:05:31.670 [2024-11-19 18:04:32.950080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.670 [2024-11-19 18:04:33.008414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.240 18:04:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.240 18:04:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:32.240 18:04:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1761448 00:05:32.240 18:04:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1761448 00:05:32.240 18:04:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.813 lslocks: write error 00:05:32.813 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1761119 00:05:32.813 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1761119 ']' 00:05:32.813 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1761119 00:05:32.813 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:32.813 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.813 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1761119 00:05:32.813 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.813 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.813 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1761119' 00:05:32.813 killing process with pid 1761119 00:05:32.813 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1761119 00:05:32.813 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1761119 00:05:33.073 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1761448 00:05:33.073 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1761448 ']' 00:05:33.073 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1761448 00:05:33.073 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:33.073 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.074 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1761448 00:05:33.074 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.074 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.074 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1761448' 00:05:33.074 killing process with pid 1761448 00:05:33.074 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1761448 00:05:33.074 18:04:34 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1761448 00:05:33.334 00:05:33.334 real 0m2.700s 00:05:33.334 user 0m3.016s 00:05:33.334 sys 0m0.783s 00:05:33.334 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.334 18:04:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.334 ************************************ 00:05:33.334 END TEST locking_app_on_unlocked_coremask 00:05:33.335 ************************************ 00:05:33.335 18:04:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:33.335 18:04:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.335 18:04:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.335 18:04:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.335 ************************************ 00:05:33.335 START TEST locking_app_on_locked_coremask 00:05:33.335 ************************************ 00:05:33.335 18:04:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:33.335 18:04:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1761822 00:05:33.335 18:04:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1761822 /var/tmp/spdk.sock 00:05:33.335 18:04:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.335 18:04:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1761822 ']' 00:05:33.335 18:04:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:05:33.335 18:04:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.335 18:04:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.335 18:04:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.335 18:04:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.595 [2024-11-19 18:04:34.828155] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:05:33.595 [2024-11-19 18:04:34.828211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761822 ] 00:05:33.595 [2024-11-19 18:04:34.911442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.595 [2024-11-19 18:04:34.942062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.166 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.166 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:34.166 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1761890 00:05:34.166 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1761890 /var/tmp/spdk2.sock 00:05:34.166 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:34.166 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:34.166 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1761890 /var/tmp/spdk2.sock 00:05:34.166 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:34.166 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.166 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:34.166 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.166 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1761890 /var/tmp/spdk2.sock 00:05:34.166 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1761890 ']' 00:05:34.166 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.166 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.167 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:34.167 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.167 18:04:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.427 [2024-11-19 18:04:35.669869] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:05:34.427 [2024-11-19 18:04:35.669923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761890 ] 00:05:34.427 [2024-11-19 18:04:35.756972] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1761822 has claimed it. 00:05:34.427 [2024-11-19 18:04:35.757007] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:34.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1761890) - No such process 00:05:34.999 ERROR: process (pid: 1761890) is no longer running 00:05:34.999 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.999 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:34.999 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:34.999 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.999 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:34.999 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.999 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1761822 00:05:34.999 18:04:36 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1761822 00:05:34.999 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.569 lslocks: write error 00:05:35.569 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1761822 00:05:35.569 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1761822 ']' 00:05:35.569 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1761822 00:05:35.569 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:35.569 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.569 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1761822 00:05:35.569 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.569 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.569 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1761822' 00:05:35.569 killing process with pid 1761822 00:05:35.569 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1761822 00:05:35.569 18:04:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1761822 00:05:35.569 00:05:35.569 real 0m2.255s 00:05:35.569 user 0m2.561s 00:05:35.569 sys 0m0.627s 00:05:35.569 18:04:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.569 18:04:37 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:35.569 ************************************ 00:05:35.569 END TEST locking_app_on_locked_coremask 00:05:35.569 ************************************ 00:05:35.830 18:04:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:35.830 18:04:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.830 18:04:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.830 18:04:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.830 ************************************ 00:05:35.830 START TEST locking_overlapped_coremask 00:05:35.830 ************************************ 00:05:35.830 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:35.830 18:04:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1762201 00:05:35.830 18:04:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1762201 /var/tmp/spdk.sock 00:05:35.830 18:04:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:35.830 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1762201 ']' 00:05:35.830 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.830 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.830 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:35.830 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.830 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.830 [2024-11-19 18:04:37.148741] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:05:35.830 [2024-11-19 18:04:37.148797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762201 ] 00:05:35.830 [2024-11-19 18:04:37.232367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.830 [2024-11-19 18:04:37.266819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.830 [2024-11-19 18:04:37.266973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.830 [2024-11-19 18:04:37.266975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1762535 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1762535 /var/tmp/spdk2.sock 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 1762535 /var/tmp/spdk2.sock 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1762535 /var/tmp/spdk2.sock 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1762535 ']' 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.772 18:04:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.772 [2024-11-19 18:04:38.014866] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:05:36.772 [2024-11-19 18:04:38.014919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762535 ] 00:05:36.772 [2024-11-19 18:04:38.127179] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1762201 has claimed it. 00:05:36.772 [2024-11-19 18:04:38.127220] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:37.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1762535) - No such process 00:05:37.344 ERROR: process (pid: 1762535) is no longer running 00:05:37.344 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.344 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:37.344 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:37.344 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:37.344 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:37.344 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:37.344 18:04:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:37.344 18:04:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:37.344 18:04:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:37.344 18:04:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:37.344 18:04:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1762201 00:05:37.344 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1762201 ']' 00:05:37.344 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1762201 00:05:37.344 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:37.344 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.344 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1762201 00:05:37.344 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.345 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.345 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1762201' 00:05:37.345 killing process with pid 1762201 00:05:37.345 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1762201 00:05:37.345 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1762201 00:05:37.607 00:05:37.607 real 0m1.784s 00:05:37.607 user 0m5.178s 00:05:37.607 sys 0m0.385s 00:05:37.607 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.607 18:04:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.607 
************************************ 00:05:37.607 END TEST locking_overlapped_coremask 00:05:37.607 ************************************ 00:05:37.607 18:04:38 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:37.607 18:04:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.607 18:04:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.607 18:04:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.607 ************************************ 00:05:37.607 START TEST locking_overlapped_coremask_via_rpc 00:05:37.607 ************************************ 00:05:37.607 18:04:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:37.607 18:04:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1762599 00:05:37.607 18:04:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1762599 /var/tmp/spdk.sock 00:05:37.607 18:04:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:37.607 18:04:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1762599 ']' 00:05:37.607 18:04:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.607 18:04:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.607 18:04:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:37.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.607 18:04:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.607 18:04:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.607 [2024-11-19 18:04:39.009077] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:05:37.607 [2024-11-19 18:04:39.009142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762599 ] 00:05:37.871 [2024-11-19 18:04:39.095673] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:37.871 [2024-11-19 18:04:39.095702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:37.872 [2024-11-19 18:04:39.131538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.872 [2024-11-19 18:04:39.131683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.872 [2024-11-19 18:04:39.131685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.454 18:04:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.454 18:04:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:38.454 18:04:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1762911 00:05:38.454 18:04:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1762911 /var/tmp/spdk2.sock 00:05:38.454 18:04:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1762911 ']' 00:05:38.454 18:04:39 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:38.454 18:04:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.454 18:04:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.454 18:04:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.454 18:04:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.454 18:04:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.454 [2024-11-19 18:04:39.863027] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:05:38.454 [2024-11-19 18:04:39.863079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762911 ] 00:05:38.717 [2024-11-19 18:04:39.976864] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:38.717 [2024-11-19 18:04:39.976893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.717 [2024-11-19 18:04:40.056246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.717 [2024-11-19 18:04:40.059282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.717 [2024-11-19 18:04:40.059283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.291 18:04:40 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.291 [2024-11-19 18:04:40.674257] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1762599 has claimed it. 00:05:39.291 request: 00:05:39.291 { 00:05:39.291 "method": "framework_enable_cpumask_locks", 00:05:39.291 "req_id": 1 00:05:39.291 } 00:05:39.291 Got JSON-RPC error response 00:05:39.291 response: 00:05:39.291 { 00:05:39.291 "code": -32603, 00:05:39.291 "message": "Failed to claim CPU core: 2" 00:05:39.291 } 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1762599 /var/tmp/spdk.sock 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1762599 ']' 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.291 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.552 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.553 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.553 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1762911 /var/tmp/spdk2.sock 00:05:39.553 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1762911 ']' 00:05:39.553 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.553 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.553 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:39.553 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.553 18:04:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.814 18:04:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.814 18:04:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.814 18:04:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:39.814 18:04:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:39.814 18:04:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:39.814 18:04:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:39.814 00:05:39.814 real 0m2.097s 00:05:39.814 user 0m0.851s 00:05:39.814 sys 0m0.170s 00:05:39.814 18:04:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.814 18:04:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.814 ************************************ 00:05:39.814 END TEST locking_overlapped_coremask_via_rpc 00:05:39.814 ************************************ 00:05:39.814 18:04:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:39.814 18:04:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1762599 ]] 00:05:39.814 18:04:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1762599 00:05:39.815 18:04:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1762599 ']' 00:05:39.815 18:04:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1762599 00:05:39.815 18:04:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:39.815 18:04:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.815 18:04:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1762599 00:05:39.815 18:04:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.815 18:04:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.815 18:04:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1762599' 00:05:39.815 killing process with pid 1762599 00:05:39.815 18:04:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1762599 00:05:39.815 18:04:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1762599 00:05:40.075 18:04:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1762911 ]] 00:05:40.075 18:04:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1762911 00:05:40.075 18:04:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1762911 ']' 00:05:40.075 18:04:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1762911 00:05:40.075 18:04:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:40.075 18:04:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.075 18:04:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1762911 00:05:40.075 18:04:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:40.075 18:04:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:40.075 18:04:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1762911' 00:05:40.075 killing process with pid 1762911 00:05:40.075 18:04:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1762911 00:05:40.075 18:04:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1762911 00:05:40.337 18:04:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:40.337 18:04:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:40.337 18:04:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1762599 ]] 00:05:40.337 18:04:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1762599 00:05:40.337 18:04:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1762599 ']' 00:05:40.337 18:04:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1762599 00:05:40.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1762599) - No such process 00:05:40.337 18:04:41 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1762599 is not found' 00:05:40.337 Process with pid 1762599 is not found 00:05:40.337 18:04:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1762911 ]] 00:05:40.337 18:04:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1762911 00:05:40.337 18:04:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1762911 ']' 00:05:40.337 18:04:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1762911 00:05:40.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1762911) - No such process 00:05:40.337 18:04:41 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1762911 is not found' 00:05:40.337 Process with pid 1762911 is not found 00:05:40.337 18:04:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:40.337 00:05:40.337 real 0m16.154s 00:05:40.337 user 0m28.304s 00:05:40.337 sys 0m4.891s 00:05:40.337 18:04:41 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.337 
18:04:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.337 ************************************ 00:05:40.337 END TEST cpu_locks 00:05:40.337 ************************************ 00:05:40.337 00:05:40.337 real 0m42.038s 00:05:40.337 user 1m22.551s 00:05:40.337 sys 0m8.285s 00:05:40.337 18:04:41 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.337 18:04:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.337 ************************************ 00:05:40.337 END TEST event 00:05:40.337 ************************************ 00:05:40.337 18:04:41 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:40.337 18:04:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.337 18:04:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.337 18:04:41 -- common/autotest_common.sh@10 -- # set +x 00:05:40.337 ************************************ 00:05:40.337 START TEST thread 00:05:40.337 ************************************ 00:05:40.337 18:04:41 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:40.598 * Looking for test storage... 
00:05:40.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:40.598 18:04:41 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.598 18:04:41 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.598 18:04:41 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.598 18:04:41 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.598 18:04:41 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.598 18:04:41 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.598 18:04:41 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.598 18:04:41 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.598 18:04:41 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.598 18:04:41 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.598 18:04:41 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.598 18:04:41 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.598 18:04:41 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.598 18:04:41 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.598 18:04:41 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.598 18:04:41 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:40.598 18:04:41 thread -- scripts/common.sh@345 -- # : 1 00:05:40.598 18:04:41 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.598 18:04:41 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.598 18:04:41 thread -- scripts/common.sh@365 -- # decimal 1 00:05:40.598 18:04:41 thread -- scripts/common.sh@353 -- # local d=1 00:05:40.598 18:04:41 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.598 18:04:41 thread -- scripts/common.sh@355 -- # echo 1 00:05:40.598 18:04:41 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.598 18:04:41 thread -- scripts/common.sh@366 -- # decimal 2 00:05:40.598 18:04:41 thread -- scripts/common.sh@353 -- # local d=2 00:05:40.598 18:04:41 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.598 18:04:41 thread -- scripts/common.sh@355 -- # echo 2 00:05:40.598 18:04:41 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.598 18:04:41 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.598 18:04:41 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.598 18:04:41 thread -- scripts/common.sh@368 -- # return 0 00:05:40.598 18:04:41 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.598 18:04:41 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.598 --rc genhtml_branch_coverage=1 00:05:40.598 --rc genhtml_function_coverage=1 00:05:40.598 --rc genhtml_legend=1 00:05:40.598 --rc geninfo_all_blocks=1 00:05:40.598 --rc geninfo_unexecuted_blocks=1 00:05:40.598 00:05:40.598 ' 00:05:40.598 18:04:41 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.598 --rc genhtml_branch_coverage=1 00:05:40.598 --rc genhtml_function_coverage=1 00:05:40.598 --rc genhtml_legend=1 00:05:40.598 --rc geninfo_all_blocks=1 00:05:40.598 --rc geninfo_unexecuted_blocks=1 00:05:40.598 00:05:40.598 ' 00:05:40.598 18:04:41 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:40.599 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.599 --rc genhtml_branch_coverage=1 00:05:40.599 --rc genhtml_function_coverage=1 00:05:40.599 --rc genhtml_legend=1 00:05:40.599 --rc geninfo_all_blocks=1 00:05:40.599 --rc geninfo_unexecuted_blocks=1 00:05:40.599 00:05:40.599 ' 00:05:40.599 18:04:41 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.599 --rc genhtml_branch_coverage=1 00:05:40.599 --rc genhtml_function_coverage=1 00:05:40.599 --rc genhtml_legend=1 00:05:40.599 --rc geninfo_all_blocks=1 00:05:40.599 --rc geninfo_unexecuted_blocks=1 00:05:40.599 00:05:40.599 ' 00:05:40.599 18:04:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:40.599 18:04:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:40.599 18:04:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.599 18:04:41 thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.599 ************************************ 00:05:40.599 START TEST thread_poller_perf 00:05:40.599 ************************************ 00:05:40.599 18:04:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:40.599 [2024-11-19 18:04:41.984416] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:05:40.599 [2024-11-19 18:04:41.984519] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763360 ] 00:05:40.860 [2024-11-19 18:04:42.072252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.860 [2024-11-19 18:04:42.111785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.860 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:41.802 [2024-11-19T17:04:43.273Z] ====================================== 00:05:41.802 [2024-11-19T17:04:43.273Z] busy:2406934788 (cyc) 00:05:41.802 [2024-11-19T17:04:43.273Z] total_run_count: 417000 00:05:41.802 [2024-11-19T17:04:43.273Z] tsc_hz: 2400000000 (cyc) 00:05:41.802 [2024-11-19T17:04:43.273Z] ====================================== 00:05:41.802 [2024-11-19T17:04:43.273Z] poller_cost: 5772 (cyc), 2405 (nsec) 00:05:41.802 00:05:41.802 real 0m1.183s 00:05:41.802 user 0m1.102s 00:05:41.802 sys 0m0.076s 00:05:41.802 18:04:43 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.802 18:04:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.803 ************************************ 00:05:41.803 END TEST thread_poller_perf 00:05:41.803 ************************************ 00:05:41.803 18:04:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:41.803 18:04:43 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:41.803 18:04:43 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.803 18:04:43 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.803 ************************************ 00:05:41.803 START TEST thread_poller_perf 00:05:41.803 
************************************ 00:05:41.803 18:04:43 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:41.803 [2024-11-19 18:04:43.244844] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:05:41.803 [2024-11-19 18:04:43.244950] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763712 ] 00:05:42.063 [2024-11-19 18:04:43.331147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.063 [2024-11-19 18:04:43.366053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.063 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:43.006 [2024-11-19T17:04:44.477Z] ====================================== 00:05:43.006 [2024-11-19T17:04:44.477Z] busy:2401487250 (cyc) 00:05:43.006 [2024-11-19T17:04:44.477Z] total_run_count: 5564000 00:05:43.006 [2024-11-19T17:04:44.477Z] tsc_hz: 2400000000 (cyc) 00:05:43.006 [2024-11-19T17:04:44.477Z] ====================================== 00:05:43.006 [2024-11-19T17:04:44.477Z] poller_cost: 431 (cyc), 179 (nsec) 00:05:43.006 00:05:43.006 real 0m1.169s 00:05:43.006 user 0m1.089s 00:05:43.006 sys 0m0.077s 00:05:43.006 18:04:44 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.006 18:04:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.006 ************************************ 00:05:43.006 END TEST thread_poller_perf 00:05:43.006 ************************************ 00:05:43.006 18:04:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:43.006 00:05:43.007 real 0m2.707s 00:05:43.007 user 0m2.362s 00:05:43.007 sys 0m0.358s 00:05:43.007 18:04:44 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.007 18:04:44 thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.007 ************************************ 00:05:43.007 END TEST thread 00:05:43.007 ************************************ 00:05:43.007 18:04:44 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:43.007 18:04:44 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:43.007 18:04:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.007 18:04:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.007 18:04:44 -- common/autotest_common.sh@10 -- # set +x 00:05:43.268 ************************************ 00:05:43.268 START TEST app_cmdline 00:05:43.268 ************************************ 00:05:43.269 18:04:44 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:43.269 * Looking for test storage... 00:05:43.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:43.269 18:04:44 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.269 18:04:44 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.269 18:04:44 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.269 18:04:44 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.269 18:04:44 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:43.269 18:04:44 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.269 18:04:44 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.269 --rc genhtml_branch_coverage=1 
00:05:43.269 --rc genhtml_function_coverage=1 00:05:43.269 --rc genhtml_legend=1 00:05:43.269 --rc geninfo_all_blocks=1 00:05:43.269 --rc geninfo_unexecuted_blocks=1 00:05:43.269 00:05:43.269 ' 00:05:43.269 18:04:44 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.269 --rc genhtml_branch_coverage=1 00:05:43.269 --rc genhtml_function_coverage=1 00:05:43.269 --rc genhtml_legend=1 00:05:43.269 --rc geninfo_all_blocks=1 00:05:43.269 --rc geninfo_unexecuted_blocks=1 00:05:43.269 00:05:43.269 ' 00:05:43.269 18:04:44 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.269 --rc genhtml_branch_coverage=1 00:05:43.269 --rc genhtml_function_coverage=1 00:05:43.269 --rc genhtml_legend=1 00:05:43.269 --rc geninfo_all_blocks=1 00:05:43.269 --rc geninfo_unexecuted_blocks=1 00:05:43.269 00:05:43.269 ' 00:05:43.269 18:04:44 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.269 --rc genhtml_branch_coverage=1 00:05:43.269 --rc genhtml_function_coverage=1 00:05:43.269 --rc genhtml_legend=1 00:05:43.269 --rc geninfo_all_blocks=1 00:05:43.269 --rc geninfo_unexecuted_blocks=1 00:05:43.269 00:05:43.269 ' 00:05:43.269 18:04:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:43.269 18:04:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1764105 00:05:43.269 18:04:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1764105 00:05:43.269 18:04:44 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:43.269 18:04:44 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1764105 ']' 00:05:43.269 18:04:44 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:43.269 18:04:44 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.269 18:04:44 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.269 18:04:44 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.269 18:04:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:43.530 [2024-11-19 18:04:44.771048] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:05:43.530 [2024-11-19 18:04:44.771103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764105 ] 00:05:43.530 [2024-11-19 18:04:44.857175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.530 [2024-11-19 18:04:44.890344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.474 18:04:45 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.474 18:04:45 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:44.474 18:04:45 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:44.474 { 00:05:44.474 "version": "SPDK v25.01-pre git sha1 8d982eda9", 00:05:44.474 "fields": { 00:05:44.474 "major": 25, 00:05:44.474 "minor": 1, 00:05:44.474 "patch": 0, 00:05:44.474 "suffix": "-pre", 00:05:44.474 "commit": "8d982eda9" 00:05:44.474 } 00:05:44.474 } 00:05:44.474 18:04:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:44.474 18:04:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:44.474 18:04:45 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:05:44.474 18:04:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:44.474 18:04:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:44.474 18:04:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:44.474 18:04:45 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.474 18:04:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.474 18:04:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:44.474 18:04:45 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.474 18:04:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:44.474 18:04:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:44.474 18:04:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:44.474 18:04:45 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:44.474 18:04:45 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:44.474 18:04:45 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.474 18:04:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.474 18:04:45 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.474 18:04:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.474 18:04:45 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.474 18:04:45 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:05:44.474 18:04:45 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.474 18:04:45 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:44.474 18:04:45 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:44.736 request: 00:05:44.736 { 00:05:44.736 "method": "env_dpdk_get_mem_stats", 00:05:44.736 "req_id": 1 00:05:44.736 } 00:05:44.736 Got JSON-RPC error response 00:05:44.736 response: 00:05:44.736 { 00:05:44.736 "code": -32601, 00:05:44.736 "message": "Method not found" 00:05:44.736 } 00:05:44.736 18:04:45 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:44.736 18:04:45 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.736 18:04:45 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:44.736 18:04:45 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.736 18:04:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1764105 00:05:44.736 18:04:45 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1764105 ']' 00:05:44.736 18:04:45 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1764105 00:05:44.736 18:04:45 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:44.736 18:04:46 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.736 18:04:46 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1764105 00:05:44.736 18:04:46 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.736 18:04:46 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.736 18:04:46 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1764105' 00:05:44.736 killing process with pid 1764105 00:05:44.736 
18:04:46 app_cmdline -- common/autotest_common.sh@973 -- # kill 1764105 00:05:44.736 18:04:46 app_cmdline -- common/autotest_common.sh@978 -- # wait 1764105 00:05:44.996 00:05:44.996 real 0m1.737s 00:05:44.996 user 0m2.105s 00:05:44.996 sys 0m0.457s 00:05:44.996 18:04:46 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.996 18:04:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.996 ************************************ 00:05:44.996 END TEST app_cmdline 00:05:44.996 ************************************ 00:05:44.996 18:04:46 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:44.996 18:04:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.996 18:04:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.996 18:04:46 -- common/autotest_common.sh@10 -- # set +x 00:05:44.996 ************************************ 00:05:44.996 START TEST version 00:05:44.996 ************************************ 00:05:44.996 18:04:46 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:44.996 * Looking for test storage... 
00:05:44.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:44.996 18:04:46 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.996 18:04:46 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.996 18:04:46 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.258 18:04:46 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.258 18:04:46 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.258 18:04:46 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.258 18:04:46 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.258 18:04:46 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.258 18:04:46 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.258 18:04:46 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.258 18:04:46 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.258 18:04:46 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.258 18:04:46 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.258 18:04:46 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.258 18:04:46 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.258 18:04:46 version -- scripts/common.sh@344 -- # case "$op" in 00:05:45.258 18:04:46 version -- scripts/common.sh@345 -- # : 1 00:05:45.258 18:04:46 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.258 18:04:46 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.258 18:04:46 version -- scripts/common.sh@365 -- # decimal 1 00:05:45.258 18:04:46 version -- scripts/common.sh@353 -- # local d=1 00:05:45.258 18:04:46 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.258 18:04:46 version -- scripts/common.sh@355 -- # echo 1 00:05:45.258 18:04:46 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.258 18:04:46 version -- scripts/common.sh@366 -- # decimal 2 00:05:45.258 18:04:46 version -- scripts/common.sh@353 -- # local d=2 00:05:45.258 18:04:46 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.258 18:04:46 version -- scripts/common.sh@355 -- # echo 2 00:05:45.258 18:04:46 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.258 18:04:46 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.258 18:04:46 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.258 18:04:46 version -- scripts/common.sh@368 -- # return 0 00:05:45.258 18:04:46 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.258 18:04:46 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.258 --rc genhtml_branch_coverage=1 00:05:45.258 --rc genhtml_function_coverage=1 00:05:45.258 --rc genhtml_legend=1 00:05:45.258 --rc geninfo_all_blocks=1 00:05:45.258 --rc geninfo_unexecuted_blocks=1 00:05:45.258 00:05:45.258 ' 00:05:45.258 18:04:46 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.258 --rc genhtml_branch_coverage=1 00:05:45.258 --rc genhtml_function_coverage=1 00:05:45.258 --rc genhtml_legend=1 00:05:45.258 --rc geninfo_all_blocks=1 00:05:45.258 --rc geninfo_unexecuted_blocks=1 00:05:45.258 00:05:45.258 ' 00:05:45.258 18:04:46 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.258 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.258 --rc genhtml_branch_coverage=1 00:05:45.258 --rc genhtml_function_coverage=1 00:05:45.258 --rc genhtml_legend=1 00:05:45.258 --rc geninfo_all_blocks=1 00:05:45.258 --rc geninfo_unexecuted_blocks=1 00:05:45.258 00:05:45.258 ' 00:05:45.258 18:04:46 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.258 --rc genhtml_branch_coverage=1 00:05:45.258 --rc genhtml_function_coverage=1 00:05:45.258 --rc genhtml_legend=1 00:05:45.258 --rc geninfo_all_blocks=1 00:05:45.258 --rc geninfo_unexecuted_blocks=1 00:05:45.258 00:05:45.258 ' 00:05:45.258 18:04:46 version -- app/version.sh@17 -- # get_header_version major 00:05:45.258 18:04:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:45.258 18:04:46 version -- app/version.sh@14 -- # cut -f2 00:05:45.258 18:04:46 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.258 18:04:46 version -- app/version.sh@17 -- # major=25 00:05:45.258 18:04:46 version -- app/version.sh@18 -- # get_header_version minor 00:05:45.258 18:04:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:45.258 18:04:46 version -- app/version.sh@14 -- # cut -f2 00:05:45.258 18:04:46 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.258 18:04:46 version -- app/version.sh@18 -- # minor=1 00:05:45.258 18:04:46 version -- app/version.sh@19 -- # get_header_version patch 00:05:45.258 18:04:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:45.258 18:04:46 version -- app/version.sh@14 -- # cut -f2 00:05:45.258 18:04:46 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.258 
18:04:46 version -- app/version.sh@19 -- # patch=0 00:05:45.258 18:04:46 version -- app/version.sh@20 -- # get_header_version suffix 00:05:45.258 18:04:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:45.258 18:04:46 version -- app/version.sh@14 -- # cut -f2 00:05:45.258 18:04:46 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.258 18:04:46 version -- app/version.sh@20 -- # suffix=-pre 00:05:45.258 18:04:46 version -- app/version.sh@22 -- # version=25.1 00:05:45.258 18:04:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:45.258 18:04:46 version -- app/version.sh@28 -- # version=25.1rc0 00:05:45.258 18:04:46 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:45.258 18:04:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:45.258 18:04:46 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:45.258 18:04:46 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:45.258 00:05:45.258 real 0m0.281s 00:05:45.258 user 0m0.177s 00:05:45.258 sys 0m0.153s 00:05:45.258 18:04:46 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.258 18:04:46 version -- common/autotest_common.sh@10 -- # set +x 00:05:45.258 ************************************ 00:05:45.258 END TEST version 00:05:45.258 ************************************ 00:05:45.258 18:04:46 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:45.258 18:04:46 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:45.258 18:04:46 -- spdk/autotest.sh@194 -- # uname -s 00:05:45.258 18:04:46 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:45.258 18:04:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:45.258 18:04:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:45.258 18:04:46 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:45.258 18:04:46 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:45.258 18:04:46 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:45.258 18:04:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:45.258 18:04:46 -- common/autotest_common.sh@10 -- # set +x 00:05:45.258 18:04:46 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:45.258 18:04:46 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:45.258 18:04:46 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:45.258 18:04:46 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:45.258 18:04:46 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:45.258 18:04:46 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:45.258 18:04:46 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:45.258 18:04:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:45.258 18:04:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.258 18:04:46 -- common/autotest_common.sh@10 -- # set +x 00:05:45.519 ************************************ 00:05:45.519 START TEST nvmf_tcp 00:05:45.519 ************************************ 00:05:45.519 18:04:46 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:45.519 * Looking for test storage... 
00:05:45.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:45.519 18:04:46 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.519 18:04:46 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.519 18:04:46 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.519 18:04:46 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.519 18:04:46 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:45.519 18:04:46 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.519 18:04:46 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.519 --rc genhtml_branch_coverage=1 00:05:45.519 --rc genhtml_function_coverage=1 00:05:45.519 --rc genhtml_legend=1 00:05:45.519 --rc geninfo_all_blocks=1 00:05:45.519 --rc geninfo_unexecuted_blocks=1 00:05:45.519 00:05:45.519 ' 00:05:45.519 18:04:46 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.519 --rc genhtml_branch_coverage=1 00:05:45.519 --rc genhtml_function_coverage=1 00:05:45.519 --rc genhtml_legend=1 00:05:45.519 --rc geninfo_all_blocks=1 00:05:45.520 --rc geninfo_unexecuted_blocks=1 00:05:45.520 00:05:45.520 ' 00:05:45.520 18:04:46 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:45.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.520 --rc genhtml_branch_coverage=1 00:05:45.520 --rc genhtml_function_coverage=1 00:05:45.520 --rc genhtml_legend=1 00:05:45.520 --rc geninfo_all_blocks=1 00:05:45.520 --rc geninfo_unexecuted_blocks=1 00:05:45.520 00:05:45.520 ' 00:05:45.520 18:04:46 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.520 --rc genhtml_branch_coverage=1 00:05:45.520 --rc genhtml_function_coverage=1 00:05:45.520 --rc genhtml_legend=1 00:05:45.520 --rc geninfo_all_blocks=1 00:05:45.520 --rc geninfo_unexecuted_blocks=1 00:05:45.520 00:05:45.520 ' 00:05:45.520 18:04:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:45.520 18:04:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:45.520 18:04:46 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:45.520 18:04:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:45.520 18:04:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.520 18:04:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.520 ************************************ 00:05:45.520 START TEST nvmf_target_core 00:05:45.520 ************************************ 00:05:45.520 18:04:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:45.781 * Looking for test storage... 
00:05:45.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.781 --rc genhtml_branch_coverage=1 00:05:45.781 --rc genhtml_function_coverage=1 00:05:45.781 --rc genhtml_legend=1 00:05:45.781 --rc geninfo_all_blocks=1 00:05:45.781 --rc geninfo_unexecuted_blocks=1 00:05:45.781 00:05:45.781 ' 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.781 --rc genhtml_branch_coverage=1 
00:05:45.781 --rc genhtml_function_coverage=1 00:05:45.781 --rc genhtml_legend=1 00:05:45.781 --rc geninfo_all_blocks=1 00:05:45.781 --rc geninfo_unexecuted_blocks=1 00:05:45.781 00:05:45.781 ' 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.781 --rc genhtml_branch_coverage=1 00:05:45.781 --rc genhtml_function_coverage=1 00:05:45.781 --rc genhtml_legend=1 00:05:45.781 --rc geninfo_all_blocks=1 00:05:45.781 --rc geninfo_unexecuted_blocks=1 00:05:45.781 00:05:45.781 ' 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.781 --rc genhtml_branch_coverage=1 00:05:45.781 --rc genhtml_function_coverage=1 00:05:45.781 --rc genhtml_legend=1 00:05:45.781 --rc geninfo_all_blocks=1 00:05:45.781 --rc geninfo_unexecuted_blocks=1 00:05:45.781 00:05:45.781 ' 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:45.781 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:45.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.782 18:04:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:46.044 ************************************ 00:05:46.044 START TEST nvmf_abort 00:05:46.044 ************************************ 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:46.044 * Looking for test storage... 
00:05:46.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.044 
18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.044 --rc genhtml_branch_coverage=1 00:05:46.044 --rc genhtml_function_coverage=1 00:05:46.044 --rc genhtml_legend=1 00:05:46.044 --rc geninfo_all_blocks=1 00:05:46.044 --rc 
geninfo_unexecuted_blocks=1 00:05:46.044 00:05:46.044 ' 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.044 --rc genhtml_branch_coverage=1 00:05:46.044 --rc genhtml_function_coverage=1 00:05:46.044 --rc genhtml_legend=1 00:05:46.044 --rc geninfo_all_blocks=1 00:05:46.044 --rc geninfo_unexecuted_blocks=1 00:05:46.044 00:05:46.044 ' 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.044 --rc genhtml_branch_coverage=1 00:05:46.044 --rc genhtml_function_coverage=1 00:05:46.044 --rc genhtml_legend=1 00:05:46.044 --rc geninfo_all_blocks=1 00:05:46.044 --rc geninfo_unexecuted_blocks=1 00:05:46.044 00:05:46.044 ' 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.044 --rc genhtml_branch_coverage=1 00:05:46.044 --rc genhtml_function_coverage=1 00:05:46.044 --rc genhtml_legend=1 00:05:46.044 --rc geninfo_all_blocks=1 00:05:46.044 --rc geninfo_unexecuted_blocks=1 00:05:46.044 00:05:46.044 ' 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.044 18:04:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.044 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:46.045 18:04:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:54.186 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:54.187 18:04:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:54.187 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:54.187 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:54.187 18:04:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:54.187 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:4b:00.1: cvl_0_1' 00:05:54.187 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:54.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:54.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:05:54.187 00:05:54.187 --- 10.0.0.2 ping statistics --- 00:05:54.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:54.187 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:54.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:54.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:05:54.187 00:05:54.187 --- 10.0.0.1 ping statistics --- 00:05:54.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:54.187 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:54.187 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:54.188 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:54.188 18:04:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.188 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1768459 00:05:54.188 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1768459 00:05:54.188 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:54.188 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1768459 ']' 00:05:54.188 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.188 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.188 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.188 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.188 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.188 [2024-11-19 18:04:55.058969] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:05:54.188 [2024-11-19 18:04:55.059034] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:54.188 [2024-11-19 18:04:55.160832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.188 [2024-11-19 18:04:55.215577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:54.188 [2024-11-19 18:04:55.215633] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:54.188 [2024-11-19 18:04:55.215643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:54.188 [2024-11-19 18:04:55.215650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:54.188 [2024-11-19 18:04:55.215656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:54.188 [2024-11-19 18:04:55.217725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.188 [2024-11-19 18:04:55.217887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.188 [2024-11-19 18:04:55.217889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.449 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.449 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:54.449 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:54.449 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:54.449 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.711 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:54.711 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:54.711 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.711 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.711 [2024-11-19 18:04:55.941088] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.711 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.711 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:54.711 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.711 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.711 Malloc0 00:05:54.711 18:04:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.711 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:54.711 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.711 18:04:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.711 Delay0 00:05:54.711 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.711 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:54.711 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.711 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.711 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.711 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:54.711 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.711 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.712 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.712 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:54.712 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.712 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.712 [2024-11-19 18:04:56.035271] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:54.712 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.712 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:54.712 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.712 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.712 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.712 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:54.973 [2024-11-19 18:04:56.186823] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:56.889 Initializing NVMe Controllers 00:05:56.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:56.889 controller IO queue size 128 less than required 00:05:56.889 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:56.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:56.889 Initialization complete. Launching workers. 
00:05:56.889 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28433 00:05:56.889 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28494, failed to submit 62 00:05:56.889 success 28437, unsuccessful 57, failed 0 00:05:56.889 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:56.889 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.889 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.889 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.889 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:56.889 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:56.889 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:56.889 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:56.889 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:56.889 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:56.889 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:56.889 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:56.889 rmmod nvme_tcp 00:05:57.153 rmmod nvme_fabrics 00:05:57.153 rmmod nvme_keyring 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:57.153 18:04:58 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1768459 ']' 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1768459 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1768459 ']' 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1768459 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1768459 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1768459' 00:05:57.153 killing process with pid 1768459 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1768459 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1768459 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:57.153 18:04:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:59.703 00:05:59.703 real 0m13.428s 00:05:59.703 user 0m14.161s 00:05:59.703 sys 0m6.622s 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:59.703 ************************************ 00:05:59.703 END TEST nvmf_abort 00:05:59.703 ************************************ 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:59.703 ************************************ 00:05:59.703 START TEST nvmf_ns_hotplug_stress 00:05:59.703 ************************************ 00:05:59.703 18:05:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:59.703 * Looking for test storage... 00:05:59.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.703 
18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.703 18:05:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:59.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.703 --rc genhtml_branch_coverage=1 00:05:59.703 --rc genhtml_function_coverage=1 00:05:59.703 --rc genhtml_legend=1 00:05:59.703 --rc geninfo_all_blocks=1 00:05:59.703 --rc geninfo_unexecuted_blocks=1 00:05:59.703 00:05:59.703 ' 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:59.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.703 --rc genhtml_branch_coverage=1 00:05:59.703 --rc genhtml_function_coverage=1 00:05:59.703 --rc genhtml_legend=1 00:05:59.703 --rc geninfo_all_blocks=1 00:05:59.703 --rc geninfo_unexecuted_blocks=1 00:05:59.703 00:05:59.703 ' 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:59.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.703 --rc genhtml_branch_coverage=1 00:05:59.703 --rc genhtml_function_coverage=1 00:05:59.703 --rc genhtml_legend=1 00:05:59.703 --rc geninfo_all_blocks=1 00:05:59.703 --rc geninfo_unexecuted_blocks=1 00:05:59.703 00:05:59.703 ' 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:59.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.703 --rc genhtml_branch_coverage=1 00:05:59.703 --rc genhtml_function_coverage=1 00:05:59.703 --rc genhtml_legend=1 00:05:59.703 --rc geninfo_all_blocks=1 00:05:59.703 --rc geninfo_unexecuted_blocks=1 00:05:59.703 
00:05:59.703 ' 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.703 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:59.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:59.704 18:05:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:59.704 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:59.704 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:59.704 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:59.704 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:07.975 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:07.975 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:07.975 18:05:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:07.975 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:07.975 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:07.976 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:07.976 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:07.976 18:05:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:07.976 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:07.976 18:05:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:07.976 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:07.976 18:05:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:07.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:07.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:06:07.976 00:06:07.976 --- 10.0.0.2 ping statistics --- 00:06:07.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:07.976 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:06:07.976 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:07.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:07.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:06:07.976 00:06:07.976 --- 10.0.0.1 ping statistics --- 00:06:07.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:07.977 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1773341 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1773341 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1773341 ']' 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.977 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:07.977 [2024-11-19 18:05:08.539060] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:06:07.977 [2024-11-19 18:05:08.539130] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:07.977 [2024-11-19 18:05:08.637150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.977 [2024-11-19 18:05:08.688780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:07.977 [2024-11-19 18:05:08.688834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:07.977 [2024-11-19 18:05:08.688843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:07.977 [2024-11-19 18:05:08.688851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:07.977 [2024-11-19 18:05:08.688857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
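The bring-up steps traced above (firewall rule for port 4420, cross-namespace pings, loading nvme-tcp, launching nvmf_tgt inside the namespace) can be sketched roughly as below. This is a dry-run sketch: `run` only prints each command, and the interface/namespace names (cvl_0_1, cvl_0_0_ns_spdk) are copied from the log; executing the real commands requires root and the veth/netns pair the harness created earlier.

```shell
# Dry-run sketch of the target bring-up sequence traced in the log above.
# `run` just prints and counts each command; swap in `"$@"` to execute
# for real (root plus the pre-created veth/netns pair are required).
n=0
run() { n=$((n + 1)); echo "+ $*"; }

run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                # host-side check
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespace-side check
run modprobe nvme-tcp                                 # initiator kernel module
run ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0xE
echo "commands prepared: $n"
```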
00:06:07.977 [2024-11-19 18:05:08.690713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.977 [2024-11-19 18:05:08.690874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.977 [2024-11-19 18:05:08.690875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.977 18:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.977 18:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:07.977 18:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:07.977 18:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:07.977 18:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:07.977 18:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:07.977 18:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:07.977 18:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:08.275 [2024-11-19 18:05:09.579701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.275 18:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:08.564 18:05:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:08.564 [2024-11-19 18:05:09.970771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:08.564 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:08.839 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:09.121 Malloc0 00:06:09.121 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:09.121 Delay0 00:06:09.407 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.407 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:09.694 NULL1 00:06:09.694 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:09.694 18:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1774046 00:06:10.006 18:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:10.006 18:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:10.006 18:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.002 Read completed with error (sct=0, sc=11) 00:06:11.002 18:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.002 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.263 18:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:11.263 18:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:11.263 true 00:06:11.525 18:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:11.525 18:05:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.358 18:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.358 18:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:12.358 18:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:12.618 true 00:06:12.618 18:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:12.618 18:05:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.879 18:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.879 18:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:12.879 18:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:13.141 true 00:06:13.141 18:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:13.141 18:05:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.526 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.526 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.526 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:14.526 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:14.787 true 00:06:14.787 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:14.787 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.730 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.730 18:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:15.730 18:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:15.730 true 00:06:15.989 18:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:15.989 18:05:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.989 18:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.249 18:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:16.249 18:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:16.510 true 00:06:16.510 18:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:16.510 18:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.893 18:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.893 18:05:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:17.893 18:05:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:17.893 true 00:06:17.893 18:05:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:17.893 18:05:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.833 18:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.093 18:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:19.093 18:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:19.093 true 00:06:19.093 18:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:19.093 18:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.354 18:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.614 18:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:19.614 18:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:19.614 true 00:06:19.614 18:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:19.614 18:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.874 18:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.135 18:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:20.135 18:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:20.135 true 00:06:20.394 18:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:20.394 18:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.394 18:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.654 18:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:20.654 18:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:20.915 true 00:06:20.915 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:20.915 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.300 18:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.300 18:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:22.300 18:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:22.300 true 00:06:22.300 18:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:22.300 18:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.242 18:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.502 18:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:23.502 18:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:23.502 true 00:06:23.502 18:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:23.502 18:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.763 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.024 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:24.024 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:24.024 true 00:06:24.024 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:24.024 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.287 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.547 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:24.547 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:24.547 true 00:06:24.809 18:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:24.809 18:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.809 18:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.069 18:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:25.069 18:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:25.330 true 00:06:25.330 18:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:25.330 18:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.330 
18:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.590 18:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:25.590 18:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:25.850 true 00:06:25.850 18:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:25.850 18:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.850 18:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.110 18:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:26.110 18:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:26.371 true 00:06:26.371 18:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:26.371 18:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.751 18:05:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.751 18:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:27.751 18:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:27.751 true 00:06:28.010 18:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:28.010 18:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.840 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.840 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:28.840 18:05:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:29.102 true 00:06:29.102 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:29.102 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.363 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.363 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:29.363 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:29.624 true 00:06:29.624 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:29.624 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.894 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.894 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.894 [2024-11-19 18:05:31.316111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894
[2024-11-19 18:05:31.317933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.317966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.317990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318483] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:29.894 [2024-11-19 18:05:31.318602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.318786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.319086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.319117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.894 [2024-11-19 18:05:31.319146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 
18:05:31.319182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.319981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 
[2024-11-19 18:05:31.320081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320541] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.320974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321638] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.321991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.322020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.322051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.322081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.322111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.322149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.322183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.322212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.322243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.322284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.322313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.895 [2024-11-19 18:05:31.322337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 
18:05:31.322515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.322976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 
[2024-11-19 18:05:31.323837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.323991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324266] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:29.896 [2024-11-19 18:05:31.324729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[~290 near-identical repetitions of the line above elided; timestamps advance from 18:05:31.324759 through 18:05:31.336233]
00:06:29.899 [2024-11-19 18:05:31.336263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:06:29.899 [2024-11-19 18:05:31.336293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.899 [2024-11-19 18:05:31.336324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.899 [2024-11-19 18:05:31.336357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.899 [2024-11-19 18:05:31.336386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.899 [2024-11-19 18:05:31.336414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.899 [2024-11-19 18:05:31.336437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336703] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.336981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 
18:05:31.337629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.337997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 
[2024-11-19 18:05:31.338701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.338966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339208] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.900 [2024-11-19 18:05:31.339712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.339742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.339771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.339803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.339834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.339865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.339892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.339930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.339958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.339986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.340017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.340050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.340075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.340109] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.340138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.340169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.340196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.340230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.340266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.340298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.340330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.340913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.340944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.340980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 
18:05:31.341568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.341995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 
[2024-11-19 18:05:31.342453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.342984] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:29.901 [2024-11-19 18:05:31.343013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
(previous ctrlr_bdev.c:361 error repeated through [2024-11-19 18:05:31.354064]) 
00:06:29.903 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 
00:06:29.903 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 
18:05:31.354095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.354965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 
[2024-11-19 18:05:31.354998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355445] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.355644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:30.194 [2024-11-19 18:05:31.355995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.356025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.356055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.356090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.356124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.356168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.356195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 
18:05:31.356228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.356260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.356290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.194 [2024-11-19 18:05:31.356335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.356997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 
[2024-11-19 18:05:31.357185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357625] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.357997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.358036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.358068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.358705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.195 [2024-11-19 18:05:31.358737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.358769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.358800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.358831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.358861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.358894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.358928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.358956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.358986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359181] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.359994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.360020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.360052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.360085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.360120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 
18:05:31.360150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.360185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.360217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.360256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.360290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.360320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.360356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.360389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.360419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.360476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.360509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.360544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.195 [2024-11-19 18:05:31.360579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.360608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.360638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.360667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.360698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.360740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.360772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.360926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.360960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.360992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 
[2024-11-19 18:05:31.361256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.196 [2024-11-19 18:05:31.361744] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199
[2024-11-19 18:05:31.372765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.372800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.372829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.372858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.372889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373324] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.373983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374535] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.374995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 
18:05:31.375500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.375909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.376051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.376083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.376117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.376147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.376184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.199 [2024-11-19 18:05:31.376215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 
[2024-11-19 18:05:31.376542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.376984] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.377977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378336] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.378968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 
18:05:31.379295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200 [2024-11-19 18:05:31.379773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.200
[previous line repeated at timestamps 18:05:31.379805 through 18:05:31.391331]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 
[2024-11-19 18:05:31.391806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.391987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392382] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.392716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:30.203 [2024-11-19 18:05:31.392745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 
18:05:31.393368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.393982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 
[2024-11-19 18:05:31.394298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.203 [2024-11-19 18:05:31.394627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.394660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.394691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.394723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.394754] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.394784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.394815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.394845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.394883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.394914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.394945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.394983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395865] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.395997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 
18:05:31.396806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.396995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.397986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.398015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.398042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 
[2024-11-19 18:05:31.398074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.398107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.398138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.398173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.398206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.398243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.398271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.398305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.398336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.398367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.398397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.398429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.398463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.398493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.398527] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.204 [2024-11-19 18:05:31.398560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
(last message repeated verbatim, timestamps 18:05:31.398586 through 18:05:31.410149)
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.410980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411074] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.411982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 
18:05:31.412408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.412988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 
[2024-11-19 18:05:31.413355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413792] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.413978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.414116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.414146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.414182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.414214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.207 [2024-11-19 18:05:31.414246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.414275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.414306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.414360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.414391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.208 [2024-11-19 18:05:31.414423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.414454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.414483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.414516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.414548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.414579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.414611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.414642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415340] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.415970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 
18:05:31.416297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.416983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 
[2024-11-19 18:05:31.417371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [2024-11-19 18:05:31.417815] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.208 [... identical "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" message repeated continuously from 18:05:31.417849 through 18:05:31.428704; duplicates elided ...] 00:06:30.210 [2024-11-19 18:05:31.428837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 
[2024-11-19 18:05:31.428867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.428897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.428927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.428958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.428989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.429021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.429050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.429080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.429114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.429145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.429178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.429217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.429247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.429277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.429315] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.429345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.429794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.429826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.429863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.429900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.429931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.429963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.430000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.430035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.430067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.430097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.430131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.210 [2024-11-19 18:05:31.430175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430705] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.430992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 
18:05:31.431636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.431987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:06:30.211 [2024-11-19 18:05:31.432166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432620] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.432969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433914] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.433976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 
18:05:31.434852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.434974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.435006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.435040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.435069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.435097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.435125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.435156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.435192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.435222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.435253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.435286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.211 [2024-11-19 18:05:31.435317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.435355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.435386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.435421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.435450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.435482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.435518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.435550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.435582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.435618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.435650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.435683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.435712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.435744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 
[2024-11-19 18:05:31.435781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.435814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.435839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.435875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.436028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.436060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.436091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.436118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.436150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.436185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.436216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.436247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.436291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.436325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.436359] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.212 [2024-11-19 18:05:31.436390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [last message repeated with successive timestamps from 18:05:31.436420 through 18:05:31.447433] 00:06:30.214 
[2024-11-19 18:05:31.447464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.447496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.447527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.447560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.447589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.447616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.447654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.447684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.447730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.447762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.447792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.447826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.447859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.447889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.447923] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.447954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.447987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.448729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449617] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.449983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 
18:05:31.450570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.450993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 
[2024-11-19 18:05:31.451701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.451985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.452015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.452048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.452110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.452140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.452176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.214 [2024-11-19 18:05:31.452206] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.452974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453163] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.453764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 
18:05:31.454693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.454976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.455001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.455027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.455051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.455077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.455102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 [2024-11-19 18:05:31.455126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.215 
[... identical *ERROR* message repeated for timestamps 18:05:31.455152 through 18:05:31.466278 ...] 
[2024-11-19 18:05:31.466307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.466343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.466374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.466405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.466455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.466484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.466518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.466882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.466912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.466944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.466980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 
[2024-11-19 18:05:31.467141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467590] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.467988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468555] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.468950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:30.217 [2024-11-19 18:05:31.469514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.217 [2024-11-19 18:05:31.469896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.469940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.469972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.469999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 
[2024-11-19 18:05:31.470307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470783] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.470976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.471977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472135] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.472992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.473023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.473051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 
18:05:31.473081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.473113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.473145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.473186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.473217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.473248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.473291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.473323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.473352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.473383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.473413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.473443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.473482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.473511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.473540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [2024-11-19 18:05:31.473573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.218 [... identical *ERROR* entries from 18:05:31.473607 through 18:05:31.485966 elided ...] [2024-11-19 18:05:31.485998] ctrlr_bdev.c: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 
[2024-11-19 18:05:31.486494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486944] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.486974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.487990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.488019] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.488064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.488094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.488125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.488154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.488191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.488222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.488254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.488285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.488315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.488346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.488384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.488416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.488446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.488509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.220 [2024-11-19 18:05:31.488539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.488569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.488602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.488631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.488662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.488693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.488723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.488755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.488785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.488815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.488849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.488880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.488916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.488949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 
18:05:31.488979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.489995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 
[2024-11-19 18:05:31.490260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490704] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.490981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491815] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.491974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.492002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.492039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.492077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.492112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.492142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.492899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.492934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.492968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.492998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.493028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.493058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.493089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.493124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.493157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.493195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.493226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.493257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.493286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.493316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.493354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.493386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.493426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.493458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 
18:05:31.493493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.221 [2024-11-19 18:05:31.493522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[last message repeated with successive timestamps from 18:05:31.493553 through 18:05:31.505055] 00:06:30.223 [2024-11-19
18:05:31.505081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.505970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 
[2024-11-19 18:05:31.506000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506440] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.506993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507566] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.507973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.508005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.508033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.508069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.508100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.508133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.223 [2024-11-19 18:05:31.508171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 
18:05:31.508491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 true 00:06:30.224 [2024-11-19 18:05:31.508805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508935] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.508999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:30.224 [2024-11-19 18:05:31.509575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 
18:05:31.509722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.509972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 
[2024-11-19 18:05:31.510628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.510976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511083] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.224 [2024-11-19 18:05:31.511559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.227 (last message repeated through [2024-11-19 18:05:31.523319])
> SGL length 1 00:06:30.227 [2024-11-19 18:05:31.523349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.227 [2024-11-19 18:05:31.523376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.227 [2024-11-19 18:05:31.523423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523785] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.523999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 
18:05:31.524917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.524984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 
[2024-11-19 18:05:31.525846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.525980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.526007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.526038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.526084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.526127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.526161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.526189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.526217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.526248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.526291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.526324] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.526353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.526384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.526417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.526456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.526487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.526517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.228 [2024-11-19 18:05:31.526548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.526579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.526624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.526975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527562] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.527983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 
18:05:31.528490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.528924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.529573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.529604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.529636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.529690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.529720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.529752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.529784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.529814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.529846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.529876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.529911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.529943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.529972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.530007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 
[2024-11-19 18:05:31.530039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.530074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.530107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.530135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.229 [2024-11-19 18:05:31.530171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.230 [2024-11-19 18:05:31.530204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.230 [2024-11-19 18:05:31.530236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.230 [2024-11-19 18:05:31.530267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.230 [2024-11-19 18:05:31.530298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.230 [2024-11-19 18:05:31.530334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.230 [2024-11-19 18:05:31.530364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.230 [2024-11-19 18:05:31.530423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.230 [2024-11-19 18:05:31.530456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.230 [2024-11-19 18:05:31.530488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.230 [2024-11-19 18:05:31.530519] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.230 [2024-11-19 18:05:31.530550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
(identical error line repeated through [2024-11-19 18:05:31.542210], timestamps 00:06:30.230-00:06:30.234)
00:06:30.231 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046
00:06:30.232 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
> SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542707] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.542994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 
18:05:31.543666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.543927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.544071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.544108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.544142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.544175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.544210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.234 [2024-11-19 18:05:31.544235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 
[2024-11-19 18:05:31.544721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.544994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545192] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.545981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.546015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.546044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.546083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.546111] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:30.235 [2024-11-19 18:05:31.546886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.546917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.546950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.546983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.547020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.547054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.547086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.547117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.547150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.547186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.547220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.235 [2024-11-19 18:05:31.547252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 
18:05:31.547317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.547982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 
[2024-11-19 18:05:31.548284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548748] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.548938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.549072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.549107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.549137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.549171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.549219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.549252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.549282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.236 [2024-11-19 18:05:31.549315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.236 [2024-11-19 18:05:31.549347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:06:30.240 [2024-11-19 18:05:31.560961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.560991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561411] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.240 [2024-11-19 18:05:31.561663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.561694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.561897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.561931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.561962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.561992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 
18:05:31.562520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.562999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 
[2024-11-19 18:05:31.563482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.563943] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.564996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.565029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.565055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.565088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.565121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.565162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.565190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.565226] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.565265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.565300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.565331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.241 [2024-11-19 18:05:31.565363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.565974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 
18:05:31.566165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.566971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 
[2024-11-19 18:05:31.567425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.242 [2024-11-19 18:05:31.567865] ctrlr_bdev.c: 
00:06:30.242 [2024-11-19 18:05:31.567894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.579527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.579557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.579591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.579619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.579651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.579681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.579725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.579754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.579778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.579814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.579853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.579881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.579912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.579945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.246 [2024-11-19 18:05:31.579983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580433] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.580824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.581187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.581222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.581251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.581281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.581311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.581341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.581373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.581409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.581439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.581474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.581505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.581537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.581568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.581593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.581626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.246 [2024-11-19 18:05:31.581656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 
18:05:31.581684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.581714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.581750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.581786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.581820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.581850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.581879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.581910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.581940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.581970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.581997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 
[2024-11-19 18:05:31.582630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.582994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.583027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.583056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.583091] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.583123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.583165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.583195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.583778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.583811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.583842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.583881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.583913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.583945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.583981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.584015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.584047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.584083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.584113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.247 [2024-11-19 18:05:31.584143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.584178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.584212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.584246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.584275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.584303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.584342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.584371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.584410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.584443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.584483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.247 [2024-11-19 18:05:31.584518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.584545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.584570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.584602] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.584632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.584660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.584690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.584728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.584762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.584798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.584831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.584860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.584893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.584926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.584959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.584988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 
18:05:31.585541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.585972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.586002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.586036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:30.248 [2024-11-19 18:05:31.586066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:06:30.248 [2024-11-19 18:05:31.586099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.586130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.586177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.586210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.586242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.586274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.586306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.586338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.586371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.586403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.586439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.586470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.586506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.586806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.586838] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.248 [2024-11-19 18:05:31.586868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598308] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.598984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599267] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.252 [2024-11-19 18:05:31.599631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.599661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.599690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.599720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.599761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.599791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.599823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.599865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.599895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.599921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.599953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.599981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 
18:05:31.600229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.600947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 
[2024-11-19 18:05:31.601680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.601997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.602029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.602063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.253 [2024-11-19 18:05:31.602096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602127] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.602989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603054] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.603977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.604007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.604043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.604075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.604105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.604135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 
18:05:31.604169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.604205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.604236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.604264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.604297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.604328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.604361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.604396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.604433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.604464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.604497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.604528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.254 [2024-11-19 18:05:31.604558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.604589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.604620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.604658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.604689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.604714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.604745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.604774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.604808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.604858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.604888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.604924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.604955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.604986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 
[2024-11-19 18:05:31.605493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605966] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.255 [2024-11-19 18:05:31.605998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [identical error message repeated, timestamps 18:05:31.605998 through 18:05:31.616941; duplicate log entries omitted] 00:06:30.259
[2024-11-19 18:05:31.616971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617432] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.617977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618566] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.259 [2024-11-19 18:05:31.618798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.618831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.618882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.618910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.618944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.618981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 
18:05:31.619572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.619759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620629] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.620964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 
[2024-11-19 18:05:31.620994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621455] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.260 [2024-11-19 18:05:31.621916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.621944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.621973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622480] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.622923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.261 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:30.261 [2024-11-19 18:05:31.622960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.623473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.623508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.623542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.623574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.623617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.623656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.623687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.623720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.623754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.623779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.623813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.623846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.623877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.623907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.623939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.623977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 
[2024-11-19 18:05:31.624358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624825] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.261 [2024-11-19 18:05:31.624857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-11-19 18:05:31.635863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.635896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.635935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.635961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.635991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636322] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.636999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637270] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.637971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 
18:05:31.638588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.638983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 
[2024-11-19 18:05:31.639553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.639981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.640178] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.640210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.640238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.640266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.640296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.640332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.640364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.552 [2024-11-19 18:05:31.640398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.640996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641127] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.641971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 
18:05:31.642137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 [2024-11-19 18:05:31.642761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.553 
[... identical "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries repeated between 18:05:31.642828 and 18:05:31.654736 elided ...]
00:06:30.555 [2024-11-19 18:05:31.654769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.654799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.654831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.654892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.654924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.654954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.654991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 
[2024-11-19 18:05:31.655286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655876] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.655973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656836] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.656996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 
18:05:31.657798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.657984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.658018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.658059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.555 [2024-11-19 18:05:31.658091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.658135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.658170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.658204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.658233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.658267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.658295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.658326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.658356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.658387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.658419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 
[2024-11-19 18:05:31.659479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659960] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.659990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660957] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.660995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:30.556 [2024-11-19 18:05:31.661764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.661955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.662005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.556 [2024-11-19 18:05:31.662036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.557
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.673677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.673711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.673743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.673775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.673804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.673841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.673876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.673908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.673936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.673965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.673995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 
[2024-11-19 18:05:31.674127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674599] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.674974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675514] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.675977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 
18:05:31.676785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.676976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 
[2024-11-19 18:05:31.677731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.558 [2024-11-19 18:05:31.677928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.678528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.678570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.678598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.678632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.678679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.678711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.678741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.678772] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.678807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.678841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.678869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.678902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.678938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.678969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679729] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.679976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 18:05:31.680728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.559 [2024-11-19 
18:05:31.680759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... previous *ERROR* line repeated verbatim several hundred times, timestamps 18:05:31.680791 through 18:05:31.691002 ...]
00:06:30.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:30.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:30.560 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
[... "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" repeated 5 more times ...]
00:06:30.560 [2024-11-19 18:05:31.884164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... previous *ERROR* line repeated verbatim, timestamps 18:05:31.884211 through 18:05:31.885006 ...]
00:06:30.561 [2024-11-19 18:05:31.885036]
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 
18:05:31.885933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.885990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 
[2024-11-19 18:05:31.886968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.886997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887403] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.887968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888721] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.888980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 
18:05:31.889618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.889997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.890025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.890054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.890082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.890110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.890144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.890176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.890205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.890234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.890268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.890297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.890321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.890355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.890388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.890422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.890458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 
[2024-11-19 18:05:31.890491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.890526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.561 [2024-11-19 18:05:31.890552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.890582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.890721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.890754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.890783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.890817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.890845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.890874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.890904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.890933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.890964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.890993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.891022] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.891052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.891084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.891114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.891141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.891176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.891673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.891705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.891733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.891770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.891800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.891830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.891858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.891889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.562 [2024-11-19 18:05:31.891918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.562 [2024-11-19 18:05:31.891970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:30.563 [2024-11-19 18:05:31.903231] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.563 [2024-11-19 18:05:31.903824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.903856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.903893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.903921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.903948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.903982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904132] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.904968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.905102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.905136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.905171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 
18:05:31.905199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.905236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.905267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.905306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.905337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.905371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.905401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.905428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.905457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.905486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.905515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.905539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.905571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 
[2024-11-19 18:05:31.906544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906962] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.906991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907871] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.907997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 
18:05:31.908957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.908986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.564 [2024-11-19 18:05:31.909826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.565 
[2024-11-19 18:05:31.909855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.565 [2024-11-19 18:05:31.909886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.565 [2024-11-19 18:05:31.909917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.565 [2024-11-19 18:05:31.909946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.565 [2024-11-19 18:05:31.909978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.565 [2024-11-19 18:05:31.910014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.565 [2024-11-19 18:05:31.910045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.565 [2024-11-19 18:05:31.910075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.565 [2024-11-19 18:05:31.910105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.565 [2024-11-19 18:05:31.910140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.565 [2024-11-19 18:05:31.910706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.565 [2024-11-19 18:05:31.910732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.565 [2024-11-19 18:05:31.910758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.565 [2024-11-19 18:05:31.910789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.565 [2024-11-19 18:05:31.910817] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.565 [2024-11-19 18:05:31.910845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:30.566 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:30.566 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:30.566 [2024-11-19 18:05:31.921109] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.921973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.922008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.922042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.922793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.922833] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.922863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.922893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.922928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.922957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.922989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.923019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.923060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.923093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.923125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.923156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.923193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.923226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.923257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.923289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.923321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.923356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.923384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.923418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.923448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.923478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.923516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.566 [2024-11-19 18:05:31.923544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.923575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.923605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.923636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.923683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.923717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.923749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 
18:05:31.923779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.923808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.923844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.923874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.923907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.923938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.923966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.923995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 
[2024-11-19 18:05:31.924706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.924990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925250] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.567 [2024-11-19 18:05:31.925991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926458] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.926997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 
18:05:31.927636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.927989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 
[2024-11-19 18:05:31.928583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.928966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.929009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 [2024-11-19 18:05:31.929038] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.567 (identical *ERROR* line repeated between 18:05:31.929 and 18:05:31.940; repeats elided) 00:06:30.569 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:30.569 [2024-11-19 18:05:31.940207] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.940998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941182] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.941999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.942029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.942070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.942100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.942131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.942167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 
18:05:31.942200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.942229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.942258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.942294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.942327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.942360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.942904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.942937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.942973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.943003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.943035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.943069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.943098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.943133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.943167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.569 [2024-11-19 18:05:31.943199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 
[2024-11-19 18:05:31.943646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.943997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944090] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.944941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945141] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.945984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 
18:05:31.946079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.946993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 
[2024-11-19 18:05:31.947324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947764] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.570 [2024-11-19 18:05:31.947794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated verbatim with successive timestamps 18:05:31.947840 through 18:05:31.958507; repeats omitted]
00:06:30.572
[2024-11-19 18:05:31.958538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.958570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.958601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.958630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.958660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.958692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.958722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.958751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.958780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.958812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.958844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.958876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.958907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.958939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.958968] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.959986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960253] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.960992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 
18:05:31.961190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.961992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 
[2024-11-19 18:05:31.962475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.572 [2024-11-19 18:05:31.962707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.962740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.962770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.962804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.962831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.962861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.962895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.962921] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.962956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.962985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.963864] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.964979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 
18:05:31.965185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 [2024-11-19 18:05:31.965662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.573 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:30.574 
[2024-11-19 18:05:31.977226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977807] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.977972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 
18:05:31.978765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.978968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.979996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 
[2024-11-19 18:05:31.980035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980485] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.980994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981426] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.981977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.982012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.982045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.982074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.982107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.982136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.982173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.575 [2024-11-19 18:05:31.982208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.982660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.982725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.982756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.982788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.982822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.982853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.982905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 
18:05:31.982934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.982972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 
[2024-11-19 18:05:31.983875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.983974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.984007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.984040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.984079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.984110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.984139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.984171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.984204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.984239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.984301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.984328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.984358] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.576 [2024-11-19 18:05:31.984388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated with successive timestamps from 18:05:31.984414 through 18:05:31.996021]
00:06:30.579 [2024-11-19 18:05:31.996054] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.996999] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.997979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 
18:05:31.998009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.998043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.998077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.998111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.998141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.998180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.998213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.998243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.998276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.998306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.998335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.998366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.998396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.579 [2024-11-19 18:05:31.998426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.998451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.998936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.998969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.998996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 
[2024-11-19 18:05:31.999375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999823] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:31.999978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000764] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.000946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.001083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.001117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.001149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.001186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.001217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.865 [2024-11-19 18:05:32.001252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 
18:05:32.001799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.001981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.002012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.002056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.002087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.002115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.002148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.002183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.002225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.002676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.002708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.002756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.002787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.002816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.002846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.002876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.002911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.002940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 
[2024-11-19 18:05:32.003209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003673] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.866 [2024-11-19 18:05:32.003709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line repeated for timestamps 18:05:32.003740 through 18:05:32.012563]
00:06:30.869 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[identical *ERROR* line repeated for timestamps 18:05:32.012598 through 18:05:32.014620]
00:06:30.870 [2024-11-19 18:05:32.014660] ctrlr_bdev.c:
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.014690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.014723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.014754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.014787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.014816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.014847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.014885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.014912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.014938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.014971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015688] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.870 [2024-11-19 18:05:32.015945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.015976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 
18:05:32.016630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.016810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 
[2024-11-19 18:05:32.017816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.017995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018289] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.018992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.871 [2024-11-19 18:05:32.019022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019412] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.019978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.020018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.020048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.020073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.020102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.020133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.020168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.020207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.020248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.020731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.020762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.020798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 
18:05:32.020828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.020863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.020893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.020923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.020968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.020998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 
[2024-11-19 18:05:32.021769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.021960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.022001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.022030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.022060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.022098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.022130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.022179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.022215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.872 [2024-11-19 18:05:32.022245] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd SGL-length error repeated continuously from 18:05:32.022274 through 18:05:32.033366 ...]
00:06:30.876 
[2024-11-19 18:05:32.033397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.033425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.033456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.033488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.033517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.033554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.033912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.033956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.033988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034183] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.034973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035002] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035095] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.876 [2024-11-19 18:05:32.035597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.035626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.035655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.035699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.035732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.035760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.035810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.035844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.035875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.035906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.035937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.035968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 
18:05:32.036176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.036997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 
[2024-11-19 18:05:32.037130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037948] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.037983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.038013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.038053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.038086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.038116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.038165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.038196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.038227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.038258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.038292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.038323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.038358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.038400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.038430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.877 [2024-11-19 18:05:32.038461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.038491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.877 [2024-11-19 18:05:32.038522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.038548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.038576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.038611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.038641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.038681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.038717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.038749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.038779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.038811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.038846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.038881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.038937] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.038967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.038997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.039860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 
18:05:32.039891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.040031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.040062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.040098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.040129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.040163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.040200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.040232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.040264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.040307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.040339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.040370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.040403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.040434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.040467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.878 [2024-11-19 18:05:32.040498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.881 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:30.882 [2024-11-19 18:05:32.051956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 
[2024-11-19 18:05:32.051995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052839] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.052985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053806] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.053997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 
18:05:32.054884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.054978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.055011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.055037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.055073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.055103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.055133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.055179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.055214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.882 [2024-11-19 18:05:32.055248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.055277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.055308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.055342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.055373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.055404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.055856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.055889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.055919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.055949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.055978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 
[2024-11-19 18:05:32.056247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056695] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.056978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057635] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.057988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 
18:05:32.058762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.883 [2024-11-19 18:05:32.058999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.884 [2024-11-19 18:05:32.059031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.884 [2024-11-19 18:05:32.059063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.884 [2024-11-19 18:05:32.059093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.884 [2024-11-19 18:05:32.059142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.884 [2024-11-19 18:05:32.059179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.884 [2024-11-19 18:05:32.059211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.884 [2024-11-19 18:05:32.059243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.884 [2024-11-19 18:05:32.059280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.884
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.070975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 
[2024-11-19 18:05:32.071408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071853] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.071984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.072764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.073272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.073305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.073334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.073375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.073405] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.073440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.073469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.073500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.073541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.073575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.073609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.073643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.887 [2024-11-19 18:05:32.073681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.073713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.073747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.073784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.073812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.073840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.073872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.073908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.073941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.073979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 
18:05:32.074353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 true 00:06:30.888 [2024-11-19 18:05:32.074513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074806] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.074976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075894] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.075997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 
18:05:32.076853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.888 [2024-11-19 18:05:32.076984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077757] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.077972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078001] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 
[2024-11-19 18:05:32.078132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078588] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.889 [2024-11-19 18:05:32.078616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:30.892 [2024-11-19 18:05:32.089652] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.089681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.089713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.089746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.089778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.089821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.089854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.089884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.089912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.089940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.089971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.090003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.090034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.090075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.090106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.892 [2024-11-19 18:05:32.090137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.090173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.090212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.090243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.090277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.090310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.090342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.090379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.090409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.090450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.090480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.090513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.892 [2024-11-19 18:05:32.090544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.090575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.090608] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.090638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.090669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.090703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.090840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.090870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.090901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.090939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.090969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.090998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 
18:05:32.091906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.091969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 
[2024-11-19 18:05:32.092849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.092973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.093013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.093045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.093074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.093111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.093145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.093189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.093220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.093250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.093278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.093309] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.093339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.093377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.093407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.093439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.093478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.093633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.893 [2024-11-19 18:05:32.093664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.093696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.093731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.093763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.093793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094685] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.094976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 
18:05:32.095613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.095966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 
[2024-11-19 18:05:32.096701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.096971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.894 [2024-11-19 18:05:32.097004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.895 [2024-11-19 18:05:32.097033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.895 [2024-11-19 18:05:32.097066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.895 [2024-11-19 18:05:32.097105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.895 [2024-11-19 18:05:32.097140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.895 [2024-11-19 18:05:32.097173] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.895 [2024-11-19 18:05:32.097206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:06:30.896 18:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 
00:06:30.896 18:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:06:30.898 [2024-11-19 18:05:32.108163] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.108200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.108231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.108266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.108297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.108326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.108813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.108842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.108872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.108899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.108933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.108961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.108997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109546] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.898 [2024-11-19 18:05:32.109856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.109896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.109927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.109960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.109990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 
18:05:32.110507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.110983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 
[2024-11-19 18:05:32.111801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.111963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112000] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112259] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.112994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.113026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.113059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.113087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.113118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.113155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.113191] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.113223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.113258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.113290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.113318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.113348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.899 [2024-11-19 18:05:32.113378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.113406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.113439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.113469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.113498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.113531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.113561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.113593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.113723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.113752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.113777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.113812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.113844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.113878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.113990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 
18:05:32.114289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.114996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 
[2024-11-19 18:05:32.115219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115648] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.900 [2024-11-19 18:05:32.115678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.903 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:30.903 [2024-11-19 18:05:32.127034] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.903 [2024-11-19 18:05:32.127059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.903 [2024-11-19 18:05:32.127099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.903 [2024-11-19 18:05:32.127133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.903 [2024-11-19 18:05:32.127170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.903 [2024-11-19 18:05:32.127204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.903 [2024-11-19 18:05:32.127235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.903 [2024-11-19 18:05:32.127273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.903 [2024-11-19 18:05:32.127303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.903 [2024-11-19 18:05:32.127333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.903 [2024-11-19 18:05:32.127365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.127976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128003] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.128995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 
18:05:32.129401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.129984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.130014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.130041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.130071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.130101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.130132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.130167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.130201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.130232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.130278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.130310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.130340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 
[2024-11-19 18:05:32.130370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.130401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.130435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.130466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.904 [2024-11-19 18:05:32.130499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.130531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.130563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.130599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.130632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.130664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.130696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.130726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.130755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.130789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.130819] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.130850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.130882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.130911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.130942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.130978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.131017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.131046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.131079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.131115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.131145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.131180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.131206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.131236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:30.905 [2024-11-19 18:05:32.131270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:06:31.846 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.846 18:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.846 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.846 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.846 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.107 18:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:32.107 18:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:32.107 true 00:06:32.107 18:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:32.107 18:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.047 18:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.308 18:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:33.308 18:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:33.308 true 00:06:33.308 18:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:33.308 18:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.567 18:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.828 18:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:33.829 18:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:33.829 true 00:06:33.829 18:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:33.829 18:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.211 18:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.211 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:06:35.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.211 18:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:35.211 18:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:35.472 true 00:06:35.472 18:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:35.472 18:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.413 18:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.413 18:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:36.413 18:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:36.673 true 00:06:36.673 18:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:36.673 18:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.673 18:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.933 18:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:36.933 18:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:37.193 true 00:06:37.193 18:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:37.193 18:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.575 18:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.575 18:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:38.575 18:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:38.575 true 00:06:38.575 18:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:38.575 18:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.515 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.515 18:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.776 18:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:39.776 18:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:39.776 true 00:06:39.776 18:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:39.776 18:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.036 18:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.036 Initializing NVMe Controllers 00:06:40.036 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:40.036 Controller IO queue size 128, less than required. 00:06:40.036 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:40.036 Controller IO queue size 128, less than required. 
00:06:40.036 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:40.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:40.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:40.037 Initialization complete. Launching workers. 00:06:40.037 ======================================================== 00:06:40.037 Latency(us) 00:06:40.037 Device Information : IOPS MiB/s Average min max 00:06:40.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3020.20 1.47 27066.45 1578.41 1110971.76 00:06:40.037 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18850.17 9.20 6790.05 1180.90 403524.69 00:06:40.037 ======================================================== 00:06:40.037 Total : 21870.37 10.68 9590.13 1180.90 1110971.76 00:06:40.037 00:06:40.297 18:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:40.297 18:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:40.297 true 00:06:40.297 18:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1774046 00:06:40.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1774046) - No such process 00:06:40.297 18:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1774046 00:06:40.297 18:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.557 18:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.818 18:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:40.818 18:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:40.818 18:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:40.818 18:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:40.818 18:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:40.818 null0 00:06:40.818 18:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:40.818 18:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:40.818 18:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:41.078 null1 00:06:41.078 18:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:41.078 18:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:41.078 18:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:41.339 null2 00:06:41.339 18:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:41.339 18:05:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:41.339 18:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:41.339 null3 00:06:41.599 18:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:41.599 18:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:41.599 18:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:41.599 null4 00:06:41.599 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:41.599 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:41.599 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:41.860 null5 00:06:41.860 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:41.860 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:41.860 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:42.120 null6 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:42.120 null7 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:42.120 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
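The xtrace above shows ns_hotplug_stress.sh launching one add_remove worker per null bdev (sh@62-64), each of which loops ten times adding and removing its namespace (sh@14-18). A minimal reconstruction of that helper, inferred from the trace; the rpc wrapper and RPC_CMD variable are assumptions standing in for scripts/rpc.py against a running target, while the NQN and RPC verbs match the log:

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the add_remove helper traced above
# (ns_hotplug_stress.sh@14-18). RPC_CMD is an assumed hook for
# scripts/rpc.py; it defaults to echo so the sketch runs standalone.
NQN=nqn.2016-06.io.spdk:cnode1

rpc() { "${RPC_CMD:-echo}" "$@"; }

add_remove() {
	local nsid=$1 bdev=$2
	# Ten add/remove cycles per worker, as in the sh@16 loop.
	for ((i = 0; i < 10; i++)); do
		rpc nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"
		rpc nvmf_subsystem_remove_ns "$NQN" "$nsid"
	done
}

# The sh@62-64 loop then runs one worker per null bdev in the
# background and waits on the collected PIDs, e.g.:
#   pids=()
#   for ((i = 0; i < 8; i++)); do
#       add_remove $((i + 1)) "null$i" & pids+=($!)
#   done
#   wait "${pids[@]}"
```

Running the workers concurrently is what makes this a hot-plug stress test: the target sees eight namespaces appearing and disappearing in an arbitrary interleaving, which matches the shuffled add/remove ordering visible in the trace.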
00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1780536 1780537 1780539 1780541 1780544 1780546 1780547 1780550 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.121 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.381 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:42.381 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.381 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.381 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.381 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.381 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:42.381 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.381 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.640 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:42.640 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.640 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.640 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:42.640 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.640 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.640 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:42.900 18:05:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.900 18:05:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.900 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:43.186 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.186 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:43.186 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:43.186 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.186 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.186 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:43.186 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.186 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.186 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:43.186 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:43.186 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:43.186 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.186 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:43.186 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:43.186 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:43.186 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:43.186 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:43.445 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.445 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.445 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:43.445 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.445 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.445 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:06:43.445 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.445 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.445 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:43.445 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.445 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.446 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:43.446 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.446 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.446 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:43.446 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.446 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.446 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:43.446 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.446 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.446 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:43.446 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.446 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.446 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:43.446 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:43.446 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:43.446 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:43.706 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.706 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:06:43.706 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:43.706 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:43.706 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.706 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:43.967 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:43.967 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.967 18:05:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.967 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:43.967 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.967 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.967 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:43.967 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:43.967 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:43.967 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.967 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:43.967 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:43.967 18:05:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:43.967 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.967 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.967 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:43.967 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.967 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.967 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.226 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:44.226 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.226 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:44.227 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.487 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.487 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.487 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:44.487 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.487 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:44.487 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.487 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.487 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.487 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.487 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:44.487 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.487 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.487 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.487 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.488 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.488 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.488 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:44.488 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.488 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.488 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.488 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.488 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.488 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.488 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.749 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.749 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.749 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.749 18:05:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.749 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.749 18:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.749 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:44.749 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.749 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.749 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.749 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.749 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:44.749 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:44.749 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.749 18:05:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.749 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.749 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.749 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:44.749 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.749 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.749 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.011 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.272 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.533 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.533 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.533 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.533 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:45.533 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.533 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.533 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.533 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.533 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.533 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:45.533 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.533 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.534 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.534 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.534 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.534 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:45.534 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.534 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.534 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:45.534 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.534 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.534 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.534 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.534 18:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.795 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.795 18:05:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.795 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.795 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.795 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.795 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.795 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:45.795 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.795 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.795 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.795 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.795 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.795 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.795 
18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:45.795 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.795 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.795 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.795 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.795 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:46.056 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.056 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.056 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.056 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.056 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.056 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.056 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:46.056 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.056 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.056 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.056 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.317 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.317 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.317 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:46.317 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:46.317 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:46.318 rmmod nvme_tcp 00:06:46.318 rmmod nvme_fabrics 00:06:46.318 rmmod nvme_keyring 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
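The interleaved `(( ++i ))` / `add_ns` / `remove_ns` entries above come from the hot-plug stress loop (the `ns_hotplug_stress.sh@16`–`@18` markers in the trace). A minimal dry-run sketch of that loop is below; `RPC` is set to `echo` here instead of the real `scripts/rpc.py`, the iteration bound of 10 and the namespace-to-bdev pairing (`-n N` with `null$((N-1))`) are taken from the log, and the sequential ordering is a simplification — the real run interleaves adds and removes concurrently, which is why they appear shuffled above.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the traced hot-plug stress loop. In the real test,
# RPC is scripts/rpc.py talking to a live SPDK nvmf target.
RPC="echo rpc.py"
NQN="nqn.2016-06.io.spdk:cnode1"

i=0
while (( i < 10 )); do
  for n in 1 2 3 4 5 6 7 8; do
    # Attach namespace n, backed by null bdev null(n-1), as seen in the log.
    $RPC nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
  done
  for n in 1 2 3 4 5 6 7 8; do
    # Detach the same namespaces again.
    $RPC nvmf_subsystem_remove_ns "$NQN" "$n"
  done
  (( ++i ))
done
echo "completed $i iterations"
```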
00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1773341 ']' 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1773341 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1773341 ']' 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1773341 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1773341 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1773341' 00:06:46.318 killing process with pid 1773341 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1773341 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1773341 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:46.318 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:46.579 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:46.580 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:46.580 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.580 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.580 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.492 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:48.492 00:06:48.492 real 0m49.095s 00:06:48.492 user 3m14.346s 00:06:48.492 sys 0m17.401s 00:06:48.492 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.492 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:48.492 ************************************ 00:06:48.492 END TEST nvmf_ns_hotplug_stress 00:06:48.492 ************************************ 00:06:48.492 18:05:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:48.492 18:05:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:48.492 18:05:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.492 18:05:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:48.492 ************************************ 00:06:48.492 START TEST nvmf_delete_subsystem 00:06:48.492 ************************************ 00:06:48.492 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:48.754 * Looking for test storage... 00:06:48.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:48.754 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.754 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.754 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.754 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.754 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.754 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.754 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.754 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.754 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.754 
18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.754 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.754 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.754 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.754 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.754 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.754 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:48.754 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:48.754 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.755 --rc genhtml_branch_coverage=1 00:06:48.755 --rc genhtml_function_coverage=1 00:06:48.755 --rc genhtml_legend=1 
00:06:48.755 --rc geninfo_all_blocks=1 00:06:48.755 --rc geninfo_unexecuted_blocks=1 00:06:48.755 00:06:48.755 ' 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.755 --rc genhtml_branch_coverage=1 00:06:48.755 --rc genhtml_function_coverage=1 00:06:48.755 --rc genhtml_legend=1 00:06:48.755 --rc geninfo_all_blocks=1 00:06:48.755 --rc geninfo_unexecuted_blocks=1 00:06:48.755 00:06:48.755 ' 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.755 --rc genhtml_branch_coverage=1 00:06:48.755 --rc genhtml_function_coverage=1 00:06:48.755 --rc genhtml_legend=1 00:06:48.755 --rc geninfo_all_blocks=1 00:06:48.755 --rc geninfo_unexecuted_blocks=1 00:06:48.755 00:06:48.755 ' 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.755 --rc genhtml_branch_coverage=1 00:06:48.755 --rc genhtml_function_coverage=1 00:06:48.755 --rc genhtml_legend=1 00:06:48.755 --rc geninfo_all_blocks=1 00:06:48.755 --rc geninfo_unexecuted_blocks=1 00:06:48.755 00:06:48.755 ' 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:48.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:48.755 18:05:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:48.755 18:05:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:56.895 18:05:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:56.895 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:56.896 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:56.896 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:56.896 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:56.896 18:05:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:56.896 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:56.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:56.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:06:56.896 00:06:56.896 --- 10.0.0.2 ping statistics --- 00:06:56.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.896 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:56.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:56.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:06:56.896 00:06:56.896 --- 10.0.0.1 ping statistics --- 00:06:56.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.896 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1785717 00:06:56.896 18:05:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1785717 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1785717 ']' 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.896 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.896 [2024-11-19 18:05:57.703671] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:06:56.896 [2024-11-19 18:05:57.703735] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.896 [2024-11-19 18:05:57.804532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.896 [2024-11-19 18:05:57.856089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:56.896 [2024-11-19 18:05:57.856144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:56.896 [2024-11-19 18:05:57.856153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:56.896 [2024-11-19 18:05:57.856170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:56.896 [2024-11-19 18:05:57.856176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:56.896 [2024-11-19 18:05:57.857893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.896 [2024-11-19 18:05:57.857894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.157 [2024-11-19 18:05:58.576752] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.157 [2024-11-19 18:05:58.601050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.157 NULL1 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.157 18:05:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.157 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.419 Delay0 00:06:57.419 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.419 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.419 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.419 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.419 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.419 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1786009 00:06:57.419 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:57.419 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:57.419 [2024-11-19 18:05:58.728064] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:59.331 18:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:59.331 18:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.331 18:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 starting I/O failed: -6 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 starting I/O failed: -6 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 starting I/O failed: -6 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 starting I/O failed: -6 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 starting I/O failed: -6 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 starting I/O failed: -6 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error 
(sct=0, sc=8) 00:06:59.593 starting I/O failed: -6 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 starting I/O failed: -6 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 starting I/O failed: -6 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 starting I/O failed: -6 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 starting I/O failed: -6 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 starting I/O failed: -6 00:06:59.593 [2024-11-19 18:06:00.814632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec1680 is same with the state(6) to be set 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, 
sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read 
completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Write completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.593 [2024-11-19 18:06:00.815185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec12c0 is same with the state(6) to be set 00:06:59.593 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 starting I/O failed: -6 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 starting I/O failed: -6 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 starting I/O failed: -6 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 starting I/O failed: -6 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 starting I/O failed: -6 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 
Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 starting I/O failed: -6 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 starting I/O failed: -6 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 starting I/O failed: -6 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 starting I/O failed: -6 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 starting I/O failed: -6 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 [2024-11-19 18:06:00.817954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd6a4000c40 is same with the state(6) to be set 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, 
sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 Write 
completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Read completed with error (sct=0, sc=8) 00:06:59.594 Write completed with error (sct=0, sc=8) 00:06:59.594 [2024-11-19 18:06:00.818463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd6a400d350 is same with the state(6) to be set 00:07:00.537 [2024-11-19 18:06:01.784255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec29a0 is same with the state(6) to be set 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Write completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Write completed with error (sct=0, sc=8) 00:07:00.537 Write completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Write completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Write completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Write completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Write completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 [2024-11-19 18:06:01.818122] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec14a0 is same with the state(6) to be set 00:07:00.537 Write completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.537 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 [2024-11-19 18:06:01.818532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec1860 is same with the state(6) to be set 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 
Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 [2024-11-19 18:06:01.820384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd6a400d680 is same with the state(6) to be set 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read 
completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 Write completed with error (sct=0, sc=8) 00:07:00.538 Read completed with error (sct=0, sc=8) 00:07:00.538 [2024-11-19 18:06:01.820493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd6a400d020 is same with the state(6) to be set 00:07:00.538 Initializing NVMe Controllers 00:07:00.538 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:00.538 Controller IO queue size 128, less than required. 00:07:00.538 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:00.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:00.538 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:00.538 Initialization complete. Launching workers. 
00:07:00.538 ======================================================== 00:07:00.538 Latency(us) 00:07:00.538 Device Information : IOPS MiB/s Average min max 00:07:00.538 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.60 0.08 888889.85 573.63 1042494.32 00:07:00.538 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.66 0.08 983410.14 513.98 2001842.53 00:07:00.538 ======================================================== 00:07:00.538 Total : 332.26 0.16 934310.53 513.98 2001842.53 00:07:00.538 00:07:00.538 [2024-11-19 18:06:01.821092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec29a0 (9): Bad file descriptor 00:07:00.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:00.538 18:06:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.538 18:06:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:00.538 18:06:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1786009 00:07:00.538 18:06:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:01.109 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:01.109 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1786009 00:07:01.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1786009) - No such process 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1786009 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:01.110 18:06:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1786009 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1786009 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.110 
18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.110 [2024-11-19 18:06:02.350446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1786855 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1786855 00:07:01.110 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.110 [2024-11-19 18:06:02.449443] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:01.682 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:01.682 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1786855 00:07:01.682 18:06:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.942 18:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:01.942 18:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1786855 00:07:01.942 18:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.514 18:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:02.515 18:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1786855 00:07:02.515 18:06:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:03.085 18:06:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:03.085 18:06:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1786855 00:07:03.085 18:06:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:03.654 18:06:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:03.654 18:06:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1786855 00:07:03.654 18:06:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:04.225 18:06:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:04.225 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1786855 00:07:04.225 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:04.225 Initializing NVMe Controllers 00:07:04.225 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:04.225 Controller IO queue size 128, less than required. 00:07:04.225 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:04.225 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:04.225 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:04.225 Initialization complete. Launching workers. 00:07:04.225 ======================================================== 00:07:04.225 Latency(us) 00:07:04.225 Device Information : IOPS MiB/s Average min max 00:07:04.225 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001942.91 1000278.57 1040685.00 00:07:04.225 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002964.24 1000274.26 1007974.86 00:07:04.225 ======================================================== 00:07:04.225 Total : 256.00 0.12 1002453.58 1000274.26 1040685.00 00:07:04.225 00:07:04.485 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:04.485 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1786855 00:07:04.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1786855) - No such process 00:07:04.485 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # 
wait 1786855 00:07:04.485 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:04.485 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:04.485 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:04.485 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:04.485 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:04.485 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:04.485 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:04.485 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:04.485 rmmod nvme_tcp 00:07:04.485 rmmod nvme_fabrics 00:07:04.485 rmmod nvme_keyring 00:07:04.746 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:04.746 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:04.746 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:04.746 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1785717 ']' 00:07:04.746 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1785717 00:07:04.746 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1785717 ']' 00:07:04.746 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1785717 00:07:04.746 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:04.746 18:06:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.746 18:06:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1785717 00:07:04.746 18:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.746 18:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.746 18:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1785717' 00:07:04.746 killing process with pid 1785717 00:07:04.746 18:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1785717 00:07:04.746 18:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1785717 00:07:04.746 18:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:04.746 18:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:04.746 18:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:04.746 18:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:04.746 18:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:04.746 18:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:04.746 18:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:04.746 18:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:04.746 18:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:04.746 18:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.746 18:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:04.746 18:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.290 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:07.290 00:07:07.290 real 0m18.277s 00:07:07.290 user 0m30.721s 00:07:07.290 sys 0m6.676s 00:07:07.290 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.290 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.290 ************************************ 00:07:07.290 END TEST nvmf_delete_subsystem 00:07:07.290 ************************************ 00:07:07.290 18:06:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:07.290 18:06:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:07.290 18:06:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.290 18:06:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:07.290 ************************************ 00:07:07.290 START TEST nvmf_host_management 00:07:07.290 ************************************ 00:07:07.290 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:07.290 * Looking for test storage... 
00:07:07.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.290 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.290 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:07.291 18:06:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.291 18:06:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.291 --rc genhtml_branch_coverage=1 00:07:07.291 --rc genhtml_function_coverage=1 00:07:07.291 --rc genhtml_legend=1 00:07:07.291 --rc geninfo_all_blocks=1 00:07:07.291 --rc geninfo_unexecuted_blocks=1 00:07:07.291 00:07:07.291 ' 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.291 --rc genhtml_branch_coverage=1 00:07:07.291 --rc genhtml_function_coverage=1 00:07:07.291 --rc genhtml_legend=1 00:07:07.291 --rc geninfo_all_blocks=1 00:07:07.291 --rc geninfo_unexecuted_blocks=1 00:07:07.291 00:07:07.291 ' 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.291 --rc genhtml_branch_coverage=1 00:07:07.291 --rc genhtml_function_coverage=1 00:07:07.291 --rc genhtml_legend=1 00:07:07.291 --rc geninfo_all_blocks=1 00:07:07.291 --rc geninfo_unexecuted_blocks=1 00:07:07.291 00:07:07.291 ' 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.291 --rc genhtml_branch_coverage=1 00:07:07.291 --rc genhtml_function_coverage=1 00:07:07.291 --rc genhtml_legend=1 00:07:07.291 --rc geninfo_all_blocks=1 00:07:07.291 --rc geninfo_unexecuted_blocks=1 00:07:07.291 00:07:07.291 ' 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.291 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:07.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:07.292 18:06:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:15.433 18:06:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.433 18:06:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:15.433 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:15.434 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:15.434 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:15.434 18:06:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:15.434 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:15.434 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:15.434 18:06:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
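The per-port discovery above (common.sh@410-429) resolves each ICE PCI function to its kernel net device by globbing sysfs, then strips the path down to the interface name (cvl_0_0, cvl_0_1). A minimal sketch of that mapping; the sysroot parameter is a test hook added for this sketch, the real script reads /sys directly:

```shell
#!/usr/bin/env bash
# Sketch of how nvmf/common.sh maps a PCI address to its kernel net device:
# the kernel exposes each netdev under /sys/bus/pci/devices/<bdf>/net/<ifname>.
pci_to_netdevs() {
    local sysroot=$1 pci=$2
    # Glob the net/ subdirectory of the PCI device...
    local devs=("$sysroot/bus/pci/devices/$pci/net/"*)
    # ...and keep only the interface names, as common.sh@427 does
    printf '%s\n' "${devs[@]##*/}"
}
```

With a real /sys this prints the "Found net devices under 0000:4b:00.0: cvl_0_0" names seen in the log.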
00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:15.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:07:15.434 00:07:15.434 --- 10.0.0.2 ping statistics --- 00:07:15.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.434 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:15.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:07:15.434 00:07:15.434 --- 10.0.0.1 ping statistics --- 00:07:15.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.434 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:15.434 18:06:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
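nvmf_tcp_init above splits the two ICE ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened, and both directions are ping-verified. A condensed sketch of those commands; DRY_RUN is an inspection hook added for this sketch, the real script runs them directly and needs root:

```shell
#!/usr/bin/env bash
# Sketch of the namespace topology nvmf_tcp_init builds (common.sh@250-291).
setup_tcp_ns() {
    local run=${DRY_RUN:-}   # DRY_RUN=echo prints the commands instead
    $run ip netns add cvl_0_0_ns_spdk
    # Target port lives inside the namespace, initiator port stays outside
    $run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    $run ip addr add 10.0.0.1/24 dev cvl_0_1
    $run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    $run ip link set cvl_0_1 up
    $run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    $run ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic in on the initiator-side interface
    $run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}
```

After this, the two pings in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) confirm the link before the target is started.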
00:07:15.434 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:15.434 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:15.434 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:15.434 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:15.434 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.434 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.434 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1792259 00:07:15.434 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1792259 00:07:15.434 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:15.434 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1792259 ']' 00:07:15.434 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.435 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.435 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:15.435 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.435 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.435 [2024-11-19 18:06:16.087451] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:07:15.435 [2024-11-19 18:06:16.087515] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.435 [2024-11-19 18:06:16.188757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.435 [2024-11-19 18:06:16.243080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.435 [2024-11-19 18:06:16.243134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.435 [2024-11-19 18:06:16.243144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.435 [2024-11-19 18:06:16.243151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.435 [2024-11-19 18:06:16.243169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
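nvmfappstart above launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the /var/tmp/spdk.sock RPC socket comes up. A simplified version of that wait loop; this sketch only checks that the path exists and the pid is alive, whereas the real helper also probes the socket over RPC, and the retry count is made a parameter here purely for testability:

```shell
#!/usr/bin/env bash
# Sketch of autotest_common.sh's waitforlisten step seen in the log.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
        [ -e "$rpc_addr" ] && return 0           # socket path appeared
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Only once this returns 0 does the harness move on to nvmf_create_transport and subsystem creation.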
00:07:15.435 [2024-11-19 18:06:16.245198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.435 [2024-11-19 18:06:16.245398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.435 [2024-11-19 18:06:16.245557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:15.435 [2024-11-19 18:06:16.245558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.696 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.696 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:15.696 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:15.696 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:15.696 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.696 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.696 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:15.696 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.696 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.696 [2024-11-19 18:06:16.965705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.696 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.696 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:15.696 18:06:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.696 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.696 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:15.696 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:15.696 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:15.696 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.696 18:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.696 Malloc0 00:07:15.696 [2024-11-19 18:06:17.045811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.696 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.696 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:15.696 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:15.696 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.696 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1792392 00:07:15.696 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1792392 /var/tmp/bdevperf.sock 00:07:15.696 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1792392 ']' 00:07:15.696 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:15.696 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.696 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:15.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:15.696 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.696 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:15.696 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:15.696 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.696 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:15.696 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:15.697 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:15.697 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:15.697 { 00:07:15.697 "params": { 00:07:15.697 "name": "Nvme$subsystem", 00:07:15.697 "trtype": "$TEST_TRANSPORT", 00:07:15.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:15.697 "adrfam": "ipv4", 00:07:15.697 "trsvcid": "$NVMF_PORT", 00:07:15.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:15.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:15.697 "hdgst": ${hdgst:-false}, 
00:07:15.697 "ddgst": ${ddgst:-false} 00:07:15.697 }, 00:07:15.697 "method": "bdev_nvme_attach_controller" 00:07:15.697 } 00:07:15.697 EOF 00:07:15.697 )") 00:07:15.697 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:15.697 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:15.697 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:15.697 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:15.697 "params": { 00:07:15.697 "name": "Nvme0", 00:07:15.697 "trtype": "tcp", 00:07:15.697 "traddr": "10.0.0.2", 00:07:15.697 "adrfam": "ipv4", 00:07:15.697 "trsvcid": "4420", 00:07:15.697 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:15.697 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:15.697 "hdgst": false, 00:07:15.697 "ddgst": false 00:07:15.697 }, 00:07:15.697 "method": "bdev_nvme_attach_controller" 00:07:15.697 }' 00:07:15.958 [2024-11-19 18:06:17.166542] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:07:15.958 [2024-11-19 18:06:17.166610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792392 ] 00:07:15.958 [2024-11-19 18:06:17.259245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.958 [2024-11-19 18:06:17.313205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.220 Running I/O for 10 seconds... 
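gen_nvmf_target_json above expands the per-subsystem heredoc template into the bdev_nvme_attach_controller JSON that bdevperf consumes via --json /dev/fd/63. A standalone reduction of that expansion, with the values hard-wired to this run's tcp/10.0.0.2:4420 settings; the real helper builds it from the $TEST_TRANSPORT/$NVMF_FIRST_TARGET_IP variables and filters the result through jq:

```shell
#!/usr/bin/env bash
# Sketch of the config stanza gen_nvmf_target_json 0 printed in the log.
gen_target_json() {
    local n=$1
    cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
```

bdevperf then attaches Nvme0n1 through this controller, and the waitforio loop that follows polls bdev_get_iostat until num_read_ops crosses 100 (771 here), proving I/O is flowing.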
00:07:16.793 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.793 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:16.793 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:16.793 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.793 18:06:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.793 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.793 [2024-11-19 18:06:18.061993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.793 [2024-11-19 18:06:18.062060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.793 [2024-11-19 18:06:18.062080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.793 [2024-11-19 18:06:18.062090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.793 [2024-11-19 18:06:18.062100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.793 [2024-11-19 18:06:18.062109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.793 [2024-11-19 18:06:18.062119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.793 [2024-11-19 18:06:18.062127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.793 [2024-11-19 18:06:18.062137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.793 [2024-11-19 18:06:18.062145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.793 [2024-11-19 18:06:18.062154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.793 [2024-11-19 18:06:18.062172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.793 [2024-11-19 18:06:18.062182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.793 [2024-11-19 18:06:18.062189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.793 [2024-11-19 18:06:18.062199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.793 [2024-11-19 18:06:18.062207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.793 [2024-11-19 18:06:18.062217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.793 [2024-11-19 18:06:18.062224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.793 [2024-11-19 18:06:18.062234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.794 [2024-11-19 18:06:18.062243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.794 [2024-11-19 18:06:18.062252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.794 [2024-11-19 18:06:18.062260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.794 [2024-11-19 18:06:18.062270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.794 [2024-11-19 18:06:18.062286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.794 [2024-11-19 18:06:18.062297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:16.794 [2024-11-19 18:06:18.062305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.794 [2024-11-19 18:06:18.062315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.794 [2024-11-19 18:06:18.062323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.794 [2024-11-19 18:06:18.062332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.794 [2024-11-19 18:06:18.062340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.794 [2024-11-19 18:06:18.062350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.794 [2024-11-19 18:06:18.062360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.794 [2024-11-19 18:06:18.062371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.794 [2024-11-19 18:06:18.062379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.794 [2024-11-19 18:06:18.062389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.794 [2024-11-19 18:06:18.062397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.794 [2024-11-19 18:06:18.062407] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.794 [2024-11-19 18:06:18.062416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ABORTED - SQ DELETION (00/08) completion notice repeats for every remaining outstanding I/O on qid:1: WRITE cid:41-63 (lba 111744-114560) and READ cid:0-21 (lba 106496-109184), all len:128 ...]
00:07:16.795 [2024-11-19 18:06:18.064518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:16.795 task offset: 109312 on job bdev=Nvme0n1 fails
00:07:16.795
00:07:16.795 Latency(us)
00:07:16.795 [2024-11-19T17:06:18.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:16.795 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:16.795 Job: Nvme0n1 ended in about 0.54 seconds with error
00:07:16.795 Verification LBA range: start 0x0 length 0x400
00:07:16.795 Nvme0n1 : 0.54 1542.67 96.42 118.67 0.00 37525.30 1979.73 35826.35
00:07:16.795 [2024-11-19T17:06:18.266Z] ===================================================================================================================
00:07:16.795 [2024-11-19T17:06:18.266Z] Total : 1542.67 96.42 118.67 0.00 37525.30 1979.73 35826.35
00:07:16.795 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.795 [2024-11-19 18:06:18.066763] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.795 [2024-11-19 18:06:18.066806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d1000 (9): Bad file descriptor 00:07:16.795 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:16.795 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.795 18:06:18
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.795 [2024-11-19 18:06:18.069114] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:16.795 [2024-11-19 18:06:18.069237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:16.795 [2024-11-19 18:06:18.069269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.795 [2024-11-19 18:06:18.069284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:16.795 [2024-11-19 18:06:18.069294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:16.795 [2024-11-19 18:06:18.069303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:16.795 [2024-11-19 18:06:18.069311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19d1000 00:07:16.795 [2024-11-19 18:06:18.069337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d1000 (9): Bad file descriptor 00:07:16.795 [2024-11-19 18:06:18.069351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:16.795 [2024-11-19 18:06:18.069360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:16.796 [2024-11-19 18:06:18.069380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:07:16.796 [2024-11-19 18:06:18.069392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:07:16.796 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.796 18:06:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:17.739 18:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1792392 00:07:17.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1792392) - No such process 00:07:17.739 18:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:17.739 18:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:17.739 18:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:17.739 18:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:17.739 18:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:17.739 18:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:17.739 18:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:17.739 18:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:17.739 { 00:07:17.739 "params": { 00:07:17.739 "name": "Nvme$subsystem", 00:07:17.739 "trtype": "$TEST_TRANSPORT", 00:07:17.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:17.739 
"adrfam": "ipv4", 00:07:17.739 "trsvcid": "$NVMF_PORT", 00:07:17.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:17.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:17.739 "hdgst": ${hdgst:-false}, 00:07:17.739 "ddgst": ${ddgst:-false} 00:07:17.739 }, 00:07:17.739 "method": "bdev_nvme_attach_controller" 00:07:17.739 } 00:07:17.739 EOF 00:07:17.739 )") 00:07:17.739 18:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:17.739 18:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:17.739 18:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:17.739 18:06:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:17.739 "params": { 00:07:17.739 "name": "Nvme0", 00:07:17.740 "trtype": "tcp", 00:07:17.740 "traddr": "10.0.0.2", 00:07:17.740 "adrfam": "ipv4", 00:07:17.740 "trsvcid": "4420", 00:07:17.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:17.740 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:17.740 "hdgst": false, 00:07:17.740 "ddgst": false 00:07:17.740 }, 00:07:17.740 "method": "bdev_nvme_attach_controller" 00:07:17.740 }' 00:07:17.740 [2024-11-19 18:06:19.140265] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:07:17.740 [2024-11-19 18:06:19.140321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792787 ] 00:07:18.000 [2024-11-19 18:06:19.228187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.000 [2024-11-19 18:06:19.263048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.262 Running I/O for 1 seconds... 
00:07:19.203 1600.00 IOPS, 100.00 MiB/s 00:07:19.203 Latency(us) 00:07:19.203 [2024-11-19T17:06:20.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.203 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:19.204 Verification LBA range: start 0x0 length 0x400 00:07:19.204 Nvme0n1 : 1.06 1577.04 98.57 0.00 0.00 38412.64 6062.08 52865.71 00:07:19.204 [2024-11-19T17:06:20.675Z] =================================================================================================================== 00:07:19.204 [2024-11-19T17:06:20.675Z] Total : 1577.04 98.57 0.00 0.00 38412.64 6062.08 52865.71 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:19.465 18:06:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:19.465 rmmod nvme_tcp 00:07:19.465 rmmod nvme_fabrics 00:07:19.465 rmmod nvme_keyring 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1792259 ']' 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1792259 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1792259 ']' 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1792259 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1792259 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1792259' 00:07:19.465 killing process with pid 1792259 00:07:19.465 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1792259 00:07:19.465 18:06:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1792259 00:07:19.465 [2024-11-19 18:06:20.933120] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:19.726 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:19.726 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:19.726 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:19.726 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:19.726 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:19.726 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:19.726 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:19.726 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:19.726 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:19.726 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.726 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.726 18:06:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.638 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:21.638 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:21.638 00:07:21.638 real 0m14.731s 00:07:21.638 user 0m23.542s 
00:07:21.638 sys 0m6.857s 00:07:21.638 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.638 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.638 ************************************ 00:07:21.638 END TEST nvmf_host_management 00:07:21.638 ************************************ 00:07:21.638 18:06:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:21.638 18:06:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:21.638 18:06:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.638 18:06:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:21.900 ************************************ 00:07:21.900 START TEST nvmf_lvol 00:07:21.900 ************************************ 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:21.900 * Looking for test storage... 
00:07:21.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.900 18:06:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:21.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.900 --rc genhtml_branch_coverage=1 00:07:21.900 --rc genhtml_function_coverage=1 00:07:21.900 --rc genhtml_legend=1 00:07:21.900 --rc geninfo_all_blocks=1 00:07:21.900 --rc geninfo_unexecuted_blocks=1 
00:07:21.900 00:07:21.900 ' 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:21.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.900 --rc genhtml_branch_coverage=1 00:07:21.900 --rc genhtml_function_coverage=1 00:07:21.900 --rc genhtml_legend=1 00:07:21.900 --rc geninfo_all_blocks=1 00:07:21.900 --rc geninfo_unexecuted_blocks=1 00:07:21.900 00:07:21.900 ' 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:21.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.900 --rc genhtml_branch_coverage=1 00:07:21.900 --rc genhtml_function_coverage=1 00:07:21.900 --rc genhtml_legend=1 00:07:21.900 --rc geninfo_all_blocks=1 00:07:21.900 --rc geninfo_unexecuted_blocks=1 00:07:21.900 00:07:21.900 ' 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:21.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.900 --rc genhtml_branch_coverage=1 00:07:21.900 --rc genhtml_function_coverage=1 00:07:21.900 --rc genhtml_legend=1 00:07:21.900 --rc geninfo_all_blocks=1 00:07:21.900 --rc geninfo_unexecuted_blocks=1 00:07:21.900 00:07:21.900 ' 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.900 18:06:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.900 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:21.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:21.901 18:06:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:30.049 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:30.050 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:30.050 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.050 
18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:30.050 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.050 18:06:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:30.050 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:30.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:30.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:07:30.050 00:07:30.050 --- 10.0.0.2 ping statistics --- 00:07:30.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.050 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:30.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:30.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:07:30.050 00:07:30.050 --- 10.0.0.1 ping statistics --- 00:07:30.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.050 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1797430 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1797430 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1797430 ']' 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.050 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.051 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.051 18:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.051 [2024-11-19 18:06:30.900473] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:07:30.051 [2024-11-19 18:06:30.900538] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.051 [2024-11-19 18:06:31.001235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.051 [2024-11-19 18:06:31.053423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.051 [2024-11-19 18:06:31.053476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.051 [2024-11-19 18:06:31.053484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.051 [2024-11-19 18:06:31.053491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.051 [2024-11-19 18:06:31.053497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:30.051 [2024-11-19 18:06:31.055547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.051 [2024-11-19 18:06:31.055711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.051 [2024-11-19 18:06:31.055711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.312 18:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.312 18:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:30.312 18:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:30.312 18:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:30.312 18:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.312 18:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.312 18:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:30.573 [2024-11-19 18:06:31.942908] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.573 18:06:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:30.834 18:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:30.834 18:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:31.095 18:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:31.095 18:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:31.356 18:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:31.618 18:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ad24ca4d-983e-4d53-bbd2-d34830a90565 00:07:31.618 18:06:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ad24ca4d-983e-4d53-bbd2-d34830a90565 lvol 20 00:07:31.618 18:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1ee455df-cf87-43ab-9649-53012ab67c63 00:07:31.618 18:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:31.880 18:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1ee455df-cf87-43ab-9649-53012ab67c63 00:07:32.142 18:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:32.142 [2024-11-19 18:06:33.578421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.142 18:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:32.402 18:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1798128 00:07:32.402 18:06:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:32.402 18:06:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:33.345 18:06:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1ee455df-cf87-43ab-9649-53012ab67c63 MY_SNAPSHOT 00:07:33.606 18:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=091948a6-215d-4d14-89dd-f955c6f50636 00:07:33.606 18:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1ee455df-cf87-43ab-9649-53012ab67c63 30 00:07:33.891 18:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 091948a6-215d-4d14-89dd-f955c6f50636 MY_CLONE 00:07:34.153 18:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6765b7e4-7fff-4325-92da-c48dd98bb691 00:07:34.153 18:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6765b7e4-7fff-4325-92da-c48dd98bb691 00:07:34.414 18:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1798128 00:07:44.418 Initializing NVMe Controllers 00:07:44.418 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:44.418 Controller IO queue size 128, less than required. 00:07:44.419 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:44.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:44.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:44.419 Initialization complete. Launching workers. 00:07:44.419 ======================================================== 00:07:44.419 Latency(us) 00:07:44.419 Device Information : IOPS MiB/s Average min max 00:07:44.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16197.70 63.27 7903.86 1531.93 51640.87 00:07:44.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17492.80 68.33 7318.58 771.05 59105.94 00:07:44.419 ======================================================== 00:07:44.419 Total : 33690.50 131.60 7599.97 771.05 59105.94 00:07:44.419 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1ee455df-cf87-43ab-9649-53012ab67c63 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ad24ca4d-983e-4d53-bbd2-d34830a90565 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:44.419 rmmod nvme_tcp 00:07:44.419 rmmod nvme_fabrics 00:07:44.419 rmmod nvme_keyring 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1797430 ']' 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1797430 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1797430 ']' 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1797430 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1797430 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1797430' 00:07:44.419 killing process with pid 1797430 00:07:44.419 18:06:44 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1797430 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1797430 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.419 18:06:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.803 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:45.803 00:07:45.803 real 0m23.946s 00:07:45.803 user 1m4.972s 00:07:45.803 sys 0m8.541s 00:07:45.803 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.803 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:45.803 ************************************ 00:07:45.803 END TEST 
nvmf_lvol 00:07:45.803 ************************************ 00:07:45.803 18:06:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:45.803 18:06:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.803 18:06:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.803 18:06:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.803 ************************************ 00:07:45.803 START TEST nvmf_lvs_grow 00:07:45.803 ************************************ 00:07:45.803 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:45.803 * Looking for test storage... 00:07:45.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.803 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:45.803 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:45.803 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.065 18:06:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:46.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.065 --rc genhtml_branch_coverage=1 00:07:46.065 --rc genhtml_function_coverage=1 00:07:46.065 --rc genhtml_legend=1 00:07:46.065 --rc geninfo_all_blocks=1 00:07:46.065 --rc geninfo_unexecuted_blocks=1 00:07:46.065 00:07:46.065 ' 
00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:46.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.065 --rc genhtml_branch_coverage=1 00:07:46.065 --rc genhtml_function_coverage=1 00:07:46.065 --rc genhtml_legend=1 00:07:46.065 --rc geninfo_all_blocks=1 00:07:46.065 --rc geninfo_unexecuted_blocks=1 00:07:46.065 00:07:46.065 ' 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:46.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.065 --rc genhtml_branch_coverage=1 00:07:46.065 --rc genhtml_function_coverage=1 00:07:46.065 --rc genhtml_legend=1 00:07:46.065 --rc geninfo_all_blocks=1 00:07:46.065 --rc geninfo_unexecuted_blocks=1 00:07:46.065 00:07:46.065 ' 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:46.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.065 --rc genhtml_branch_coverage=1 00:07:46.065 --rc genhtml_function_coverage=1 00:07:46.065 --rc genhtml_legend=1 00:07:46.065 --rc geninfo_all_blocks=1 00:07:46.065 --rc geninfo_unexecuted_blocks=1 00:07:46.065 00:07:46.065 ' 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.065 18:06:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.065 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.066 
18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.066 18:06:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.066 
18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:46.066 18:06:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:54.213 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:54.214 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:54.214 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:54.214 
18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:54.214 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:54.214 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:54.214 18:06:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:54.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:54.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:07:54.214 00:07:54.214 --- 10.0.0.2 ping statistics --- 00:07:54.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.214 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:54.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:07:54.214 00:07:54.214 --- 10.0.0.1 ping statistics --- 00:07:54.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.214 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1804504 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1804504 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1804504 ']' 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.214 18:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.214 [2024-11-19 18:06:54.972997] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:07:54.214 [2024-11-19 18:06:54.973057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.214 [2024-11-19 18:06:55.072317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.215 [2024-11-19 18:06:55.123312] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.215 [2024-11-19 18:06:55.123367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.215 [2024-11-19 18:06:55.123376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.215 [2024-11-19 18:06:55.123383] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.215 [2024-11-19 18:06:55.123389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:54.215 [2024-11-19 18:06:55.124140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.476 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.476 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:54.476 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:54.476 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:54.476 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.476 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.476 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:54.738 [2024-11-19 18:06:56.014327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.738 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:54.738 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.738 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.738 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.738 ************************************ 00:07:54.738 START TEST lvs_grow_clean 00:07:54.738 ************************************ 00:07:54.738 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:54.738 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:54.738 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:54.738 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:54.738 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:54.738 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:54.738 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:54.738 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:54.738 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:54.738 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:54.999 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:54.999 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:55.262 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=bb638554-f263-4846-a84c-e0c1055f44a5 00:07:55.262 18:06:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb638554-f263-4846-a84c-e0c1055f44a5 00:07:55.262 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:55.262 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:55.262 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:55.262 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bb638554-f263-4846-a84c-e0c1055f44a5 lvol 150 00:07:55.522 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6d9b1243-3ed7-49dc-8139-d6d2731cd30f 00:07:55.522 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:55.522 18:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:55.783 [2024-11-19 18:06:57.019928] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:55.783 [2024-11-19 18:06:57.020003] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:55.783 true 00:07:55.783 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb638554-f263-4846-a84c-e0c1055f44a5 00:07:55.783 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:55.783 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:55.783 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:56.043 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6d9b1243-3ed7-49dc-8139-d6d2731cd30f 00:07:56.304 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:56.304 [2024-11-19 18:06:57.726171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.304 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:56.564 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1805215 00:07:56.564 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:56.565 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:56.565 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1805215 /var/tmp/bdevperf.sock 00:07:56.565 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1805215 ']' 00:07:56.565 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:56.565 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.565 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:56.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:56.565 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.565 18:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:56.565 [2024-11-19 18:06:57.977947] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:07:56.565 [2024-11-19 18:06:57.978016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1805215 ] 00:07:56.826 [2024-11-19 18:06:58.070434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.826 [2024-11-19 18:06:58.122616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.397 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.397 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:57.397 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:57.968 Nvme0n1 00:07:57.968 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:57.968 [ 00:07:57.968 { 00:07:57.968 "name": "Nvme0n1", 00:07:57.968 "aliases": [ 00:07:57.968 "6d9b1243-3ed7-49dc-8139-d6d2731cd30f" 00:07:57.968 ], 00:07:57.968 "product_name": "NVMe disk", 00:07:57.968 "block_size": 4096, 00:07:57.968 "num_blocks": 38912, 00:07:57.968 "uuid": "6d9b1243-3ed7-49dc-8139-d6d2731cd30f", 00:07:57.968 "numa_id": 0, 00:07:57.968 "assigned_rate_limits": { 00:07:57.968 "rw_ios_per_sec": 0, 00:07:57.968 "rw_mbytes_per_sec": 0, 00:07:57.968 "r_mbytes_per_sec": 0, 00:07:57.968 "w_mbytes_per_sec": 0 00:07:57.968 }, 00:07:57.968 "claimed": false, 00:07:57.968 "zoned": false, 00:07:57.968 "supported_io_types": { 00:07:57.968 "read": true, 
00:07:57.968 "write": true, 00:07:57.968 "unmap": true, 00:07:57.968 "flush": true, 00:07:57.968 "reset": true, 00:07:57.968 "nvme_admin": true, 00:07:57.968 "nvme_io": true, 00:07:57.968 "nvme_io_md": false, 00:07:57.968 "write_zeroes": true, 00:07:57.968 "zcopy": false, 00:07:57.968 "get_zone_info": false, 00:07:57.968 "zone_management": false, 00:07:57.968 "zone_append": false, 00:07:57.968 "compare": true, 00:07:57.968 "compare_and_write": true, 00:07:57.968 "abort": true, 00:07:57.968 "seek_hole": false, 00:07:57.968 "seek_data": false, 00:07:57.968 "copy": true, 00:07:57.968 "nvme_iov_md": false 00:07:57.968 }, 00:07:57.968 "memory_domains": [ 00:07:57.968 { 00:07:57.968 "dma_device_id": "system", 00:07:57.968 "dma_device_type": 1 00:07:57.968 } 00:07:57.968 ], 00:07:57.968 "driver_specific": { 00:07:57.968 "nvme": [ 00:07:57.968 { 00:07:57.968 "trid": { 00:07:57.968 "trtype": "TCP", 00:07:57.968 "adrfam": "IPv4", 00:07:57.968 "traddr": "10.0.0.2", 00:07:57.968 "trsvcid": "4420", 00:07:57.968 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:57.968 }, 00:07:57.968 "ctrlr_data": { 00:07:57.968 "cntlid": 1, 00:07:57.968 "vendor_id": "0x8086", 00:07:57.968 "model_number": "SPDK bdev Controller", 00:07:57.968 "serial_number": "SPDK0", 00:07:57.968 "firmware_revision": "25.01", 00:07:57.968 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:57.968 "oacs": { 00:07:57.968 "security": 0, 00:07:57.968 "format": 0, 00:07:57.968 "firmware": 0, 00:07:57.968 "ns_manage": 0 00:07:57.968 }, 00:07:57.968 "multi_ctrlr": true, 00:07:57.968 "ana_reporting": false 00:07:57.968 }, 00:07:57.968 "vs": { 00:07:57.968 "nvme_version": "1.3" 00:07:57.968 }, 00:07:57.968 "ns_data": { 00:07:57.968 "id": 1, 00:07:57.968 "can_share": true 00:07:57.968 } 00:07:57.968 } 00:07:57.968 ], 00:07:57.968 "mp_policy": "active_passive" 00:07:57.968 } 00:07:57.968 } 00:07:57.968 ] 00:07:57.968 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1805413 00:07:57.968 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:57.968 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:58.229 Running I/O for 10 seconds... 00:07:59.172 Latency(us) 00:07:59.172 [2024-11-19T17:07:00.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.172 Nvme0n1 : 1.00 24515.00 95.76 0.00 0.00 0.00 0.00 0.00 00:07:59.172 [2024-11-19T17:07:00.643Z] =================================================================================================================== 00:07:59.172 [2024-11-19T17:07:00.643Z] Total : 24515.00 95.76 0.00 0.00 0.00 0.00 0.00 00:07:59.172 00:08:00.111 18:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bb638554-f263-4846-a84c-e0c1055f44a5 00:08:00.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.111 Nvme0n1 : 2.00 24978.50 97.57 0.00 0.00 0.00 0.00 0.00 00:08:00.111 [2024-11-19T17:07:01.582Z] =================================================================================================================== 00:08:00.111 [2024-11-19T17:07:01.582Z] Total : 24978.50 97.57 0.00 0.00 0.00 0.00 0.00 00:08:00.111 00:08:00.111 true 00:08:00.111 18:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb638554-f263-4846-a84c-e0c1055f44a5 00:08:00.111 18:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:00.371 18:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:00.371 18:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:00.371 18:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1805413 00:08:01.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.312 Nvme0n1 : 3.00 25174.33 98.34 0.00 0.00 0.00 0.00 0.00 00:08:01.312 [2024-11-19T17:07:02.783Z] =================================================================================================================== 00:08:01.312 [2024-11-19T17:07:02.783Z] Total : 25174.33 98.34 0.00 0.00 0.00 0.00 0.00 00:08:01.312 00:08:02.252 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.252 Nvme0n1 : 4.00 25273.75 98.73 0.00 0.00 0.00 0.00 0.00 00:08:02.252 [2024-11-19T17:07:03.723Z] =================================================================================================================== 00:08:02.252 [2024-11-19T17:07:03.723Z] Total : 25273.75 98.73 0.00 0.00 0.00 0.00 0.00 00:08:02.252 00:08:03.192 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.192 Nvme0n1 : 5.00 25338.60 98.98 0.00 0.00 0.00 0.00 0.00 00:08:03.192 [2024-11-19T17:07:04.663Z] =================================================================================================================== 00:08:03.192 [2024-11-19T17:07:04.663Z] Total : 25338.60 98.98 0.00 0.00 0.00 0.00 0.00 00:08:03.192 00:08:04.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.132 Nvme0n1 : 6.00 25392.67 99.19 0.00 0.00 0.00 0.00 0.00 00:08:04.132 [2024-11-19T17:07:05.603Z] =================================================================================================================== 00:08:04.132 
[2024-11-19T17:07:05.603Z] Total : 25392.67 99.19 0.00 0.00 0.00 0.00 0.00 00:08:04.132 00:08:05.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.072 Nvme0n1 : 7.00 25422.14 99.31 0.00 0.00 0.00 0.00 0.00 00:08:05.072 [2024-11-19T17:07:06.543Z] =================================================================================================================== 00:08:05.072 [2024-11-19T17:07:06.543Z] Total : 25422.14 99.31 0.00 0.00 0.00 0.00 0.00 00:08:05.072 00:08:06.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.014 Nvme0n1 : 8.00 25460.50 99.46 0.00 0.00 0.00 0.00 0.00 00:08:06.014 [2024-11-19T17:07:07.485Z] =================================================================================================================== 00:08:06.014 [2024-11-19T17:07:07.485Z] Total : 25460.50 99.46 0.00 0.00 0.00 0.00 0.00 00:08:06.014 00:08:07.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.398 Nvme0n1 : 9.00 25482.89 99.54 0.00 0.00 0.00 0.00 0.00 00:08:07.398 [2024-11-19T17:07:08.869Z] =================================================================================================================== 00:08:07.398 [2024-11-19T17:07:08.869Z] Total : 25482.89 99.54 0.00 0.00 0.00 0.00 0.00 00:08:07.398 00:08:08.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.340 Nvme0n1 : 10.00 25500.80 99.61 0.00 0.00 0.00 0.00 0.00 00:08:08.340 [2024-11-19T17:07:09.811Z] =================================================================================================================== 00:08:08.340 [2024-11-19T17:07:09.811Z] Total : 25500.80 99.61 0.00 0.00 0.00 0.00 0.00 00:08:08.340 00:08:08.340 00:08:08.340 Latency(us) 00:08:08.340 [2024-11-19T17:07:09.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:08.340 Nvme0n1 : 10.00 25503.44 99.62 0.00 0.00 5015.47 2484.91 17585.49 00:08:08.340 [2024-11-19T17:07:09.811Z] =================================================================================================================== 00:08:08.340 [2024-11-19T17:07:09.811Z] Total : 25503.44 99.62 0.00 0.00 5015.47 2484.91 17585.49 00:08:08.340 { 00:08:08.340 "results": [ 00:08:08.340 { 00:08:08.340 "job": "Nvme0n1", 00:08:08.340 "core_mask": "0x2", 00:08:08.340 "workload": "randwrite", 00:08:08.340 "status": "finished", 00:08:08.340 "queue_depth": 128, 00:08:08.340 "io_size": 4096, 00:08:08.340 "runtime": 10.003983, 00:08:08.340 "iops": 25503.441979059742, 00:08:08.340 "mibps": 99.62282023070212, 00:08:08.340 "io_failed": 0, 00:08:08.340 "io_timeout": 0, 00:08:08.340 "avg_latency_us": 5015.473891048957, 00:08:08.340 "min_latency_us": 2484.9066666666668, 00:08:08.340 "max_latency_us": 17585.493333333332 00:08:08.340 } 00:08:08.340 ], 00:08:08.340 "core_count": 1 00:08:08.340 } 00:08:08.340 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1805215 00:08:08.340 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1805215 ']' 00:08:08.340 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1805215 00:08:08.340 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:08.340 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.340 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1805215 00:08:08.340 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:08.340 18:07:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:08.340 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1805215' 00:08:08.340 killing process with pid 1805215 00:08:08.340 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1805215 00:08:08.340 Received shutdown signal, test time was about 10.000000 seconds 00:08:08.340 00:08:08.340 Latency(us) 00:08:08.340 [2024-11-19T17:07:09.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.340 [2024-11-19T17:07:09.811Z] =================================================================================================================== 00:08:08.340 [2024-11-19T17:07:09.811Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:08.340 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1805215 00:08:08.340 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:08.600 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:08.600 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb638554-f263-4846-a84c-e0c1055f44a5 00:08:08.600 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:08.860 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:08.860 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:08.860 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:09.121 [2024-11-19 18:07:10.372449] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:09.121 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb638554-f263-4846-a84c-e0c1055f44a5 00:08:09.121 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:09.121 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb638554-f263-4846-a84c-e0c1055f44a5 00:08:09.121 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.121 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.121 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.121 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.121 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.121 
18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.121 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.121 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:09.121 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb638554-f263-4846-a84c-e0c1055f44a5 00:08:09.121 request: 00:08:09.121 { 00:08:09.121 "uuid": "bb638554-f263-4846-a84c-e0c1055f44a5", 00:08:09.121 "method": "bdev_lvol_get_lvstores", 00:08:09.121 "req_id": 1 00:08:09.121 } 00:08:09.121 Got JSON-RPC error response 00:08:09.121 response: 00:08:09.121 { 00:08:09.121 "code": -19, 00:08:09.121 "message": "No such device" 00:08:09.121 } 00:08:09.382 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:09.382 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:09.382 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:09.382 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:09.382 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:09.382 aio_bdev 00:08:09.382 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6d9b1243-3ed7-49dc-8139-d6d2731cd30f 00:08:09.382 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=6d9b1243-3ed7-49dc-8139-d6d2731cd30f 00:08:09.382 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.382 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:09.382 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.382 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.382 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:09.643 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6d9b1243-3ed7-49dc-8139-d6d2731cd30f -t 2000 00:08:09.643 [ 00:08:09.643 { 00:08:09.643 "name": "6d9b1243-3ed7-49dc-8139-d6d2731cd30f", 00:08:09.643 "aliases": [ 00:08:09.643 "lvs/lvol" 00:08:09.643 ], 00:08:09.643 "product_name": "Logical Volume", 00:08:09.643 "block_size": 4096, 00:08:09.643 "num_blocks": 38912, 00:08:09.643 "uuid": "6d9b1243-3ed7-49dc-8139-d6d2731cd30f", 00:08:09.643 "assigned_rate_limits": { 00:08:09.643 "rw_ios_per_sec": 0, 00:08:09.643 "rw_mbytes_per_sec": 0, 00:08:09.643 "r_mbytes_per_sec": 0, 00:08:09.643 "w_mbytes_per_sec": 0 00:08:09.643 }, 00:08:09.643 "claimed": false, 00:08:09.643 "zoned": false, 00:08:09.643 "supported_io_types": { 00:08:09.643 "read": true, 00:08:09.643 "write": true, 00:08:09.643 "unmap": true, 00:08:09.643 "flush": false, 00:08:09.643 "reset": true, 00:08:09.643 
"nvme_admin": false, 00:08:09.643 "nvme_io": false, 00:08:09.643 "nvme_io_md": false, 00:08:09.643 "write_zeroes": true, 00:08:09.643 "zcopy": false, 00:08:09.643 "get_zone_info": false, 00:08:09.643 "zone_management": false, 00:08:09.643 "zone_append": false, 00:08:09.643 "compare": false, 00:08:09.643 "compare_and_write": false, 00:08:09.643 "abort": false, 00:08:09.643 "seek_hole": true, 00:08:09.643 "seek_data": true, 00:08:09.643 "copy": false, 00:08:09.643 "nvme_iov_md": false 00:08:09.643 }, 00:08:09.643 "driver_specific": { 00:08:09.643 "lvol": { 00:08:09.643 "lvol_store_uuid": "bb638554-f263-4846-a84c-e0c1055f44a5", 00:08:09.643 "base_bdev": "aio_bdev", 00:08:09.643 "thin_provision": false, 00:08:09.643 "num_allocated_clusters": 38, 00:08:09.643 "snapshot": false, 00:08:09.643 "clone": false, 00:08:09.643 "esnap_clone": false 00:08:09.643 } 00:08:09.643 } 00:08:09.643 } 00:08:09.643 ] 00:08:09.904 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:09.904 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb638554-f263-4846-a84c-e0c1055f44a5 00:08:09.904 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:09.904 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:09.904 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bb638554-f263-4846-a84c-e0c1055f44a5 00:08:09.904 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:10.164 18:07:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:10.164 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6d9b1243-3ed7-49dc-8139-d6d2731cd30f 00:08:10.424 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bb638554-f263-4846-a84c-e0c1055f44a5 00:08:10.424 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:10.684 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:10.684 00:08:10.684 real 0m15.949s 00:08:10.684 user 0m15.651s 00:08:10.684 sys 0m1.443s 00:08:10.684 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.684 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:10.684 ************************************ 00:08:10.684 END TEST lvs_grow_clean 00:08:10.684 ************************************ 00:08:10.684 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:10.684 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:10.684 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.684 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:10.684 ************************************ 
00:08:10.684 START TEST lvs_grow_dirty 00:08:10.684 ************************************ 00:08:10.684 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:10.684 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:10.684 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:10.684 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:10.684 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:10.684 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:10.684 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:10.684 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:10.684 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:10.684 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:10.944 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:10.944 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:11.204 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e2f7912d-dac0-4960-84bc-fb398410cfc3 00:08:11.204 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2f7912d-dac0-4960-84bc-fb398410cfc3 00:08:11.204 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:11.204 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:11.204 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:11.204 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e2f7912d-dac0-4960-84bc-fb398410cfc3 lvol 150 00:08:11.464 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fca61504-003c-4fbe-bad6-56edb1e93ae4 00:08:11.465 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:11.465 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:11.724 [2024-11-19 18:07:12.972776] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:11.724 [2024-11-19 18:07:12.972820] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:11.724 true 00:08:11.724 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2f7912d-dac0-4960-84bc-fb398410cfc3 00:08:11.724 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:11.724 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:11.724 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:11.985 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fca61504-003c-4fbe-bad6-56edb1e93ae4 00:08:12.246 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:12.246 [2024-11-19 18:07:13.658742] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.246 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:12.506 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:12.506 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1808308 00:08:12.506 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:12.506 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1808308 /var/tmp/bdevperf.sock 00:08:12.506 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1808308 ']' 00:08:12.506 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:12.506 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.507 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:12.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:12.507 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.507 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:12.507 [2024-11-19 18:07:13.890234] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:08:12.507 [2024-11-19 18:07:13.890284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1808308 ] 00:08:12.507 [2024-11-19 18:07:13.973479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.767 [2024-11-19 18:07:14.003151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.767 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.767 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:12.767 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:13.028 Nvme0n1 00:08:13.028 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:13.290 [ 00:08:13.290 { 00:08:13.290 "name": "Nvme0n1", 00:08:13.290 "aliases": [ 00:08:13.290 "fca61504-003c-4fbe-bad6-56edb1e93ae4" 00:08:13.290 ], 00:08:13.290 "product_name": "NVMe disk", 00:08:13.290 "block_size": 4096, 00:08:13.290 "num_blocks": 38912, 00:08:13.290 "uuid": "fca61504-003c-4fbe-bad6-56edb1e93ae4", 00:08:13.290 "numa_id": 0, 00:08:13.290 "assigned_rate_limits": { 00:08:13.290 "rw_ios_per_sec": 0, 00:08:13.290 "rw_mbytes_per_sec": 0, 00:08:13.290 "r_mbytes_per_sec": 0, 00:08:13.290 "w_mbytes_per_sec": 0 00:08:13.290 }, 00:08:13.290 "claimed": false, 00:08:13.290 "zoned": false, 00:08:13.290 "supported_io_types": { 00:08:13.290 "read": true, 
00:08:13.290 "write": true, 00:08:13.290 "unmap": true, 00:08:13.290 "flush": true, 00:08:13.290 "reset": true, 00:08:13.290 "nvme_admin": true, 00:08:13.290 "nvme_io": true, 00:08:13.290 "nvme_io_md": false, 00:08:13.290 "write_zeroes": true, 00:08:13.290 "zcopy": false, 00:08:13.290 "get_zone_info": false, 00:08:13.290 "zone_management": false, 00:08:13.290 "zone_append": false, 00:08:13.290 "compare": true, 00:08:13.290 "compare_and_write": true, 00:08:13.290 "abort": true, 00:08:13.290 "seek_hole": false, 00:08:13.290 "seek_data": false, 00:08:13.290 "copy": true, 00:08:13.290 "nvme_iov_md": false 00:08:13.290 }, 00:08:13.290 "memory_domains": [ 00:08:13.290 { 00:08:13.290 "dma_device_id": "system", 00:08:13.290 "dma_device_type": 1 00:08:13.290 } 00:08:13.290 ], 00:08:13.290 "driver_specific": { 00:08:13.290 "nvme": [ 00:08:13.290 { 00:08:13.290 "trid": { 00:08:13.290 "trtype": "TCP", 00:08:13.290 "adrfam": "IPv4", 00:08:13.290 "traddr": "10.0.0.2", 00:08:13.290 "trsvcid": "4420", 00:08:13.290 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:13.290 }, 00:08:13.290 "ctrlr_data": { 00:08:13.290 "cntlid": 1, 00:08:13.290 "vendor_id": "0x8086", 00:08:13.290 "model_number": "SPDK bdev Controller", 00:08:13.290 "serial_number": "SPDK0", 00:08:13.290 "firmware_revision": "25.01", 00:08:13.290 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:13.290 "oacs": { 00:08:13.290 "security": 0, 00:08:13.290 "format": 0, 00:08:13.290 "firmware": 0, 00:08:13.290 "ns_manage": 0 00:08:13.290 }, 00:08:13.290 "multi_ctrlr": true, 00:08:13.290 "ana_reporting": false 00:08:13.290 }, 00:08:13.290 "vs": { 00:08:13.290 "nvme_version": "1.3" 00:08:13.290 }, 00:08:13.290 "ns_data": { 00:08:13.290 "id": 1, 00:08:13.290 "can_share": true 00:08:13.290 } 00:08:13.290 } 00:08:13.290 ], 00:08:13.290 "mp_policy": "active_passive" 00:08:13.290 } 00:08:13.290 } 00:08:13.290 ] 00:08:13.290 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1808435 00:08:13.290 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:13.290 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:13.290 Running I/O for 10 seconds... 00:08:14.232 Latency(us) 00:08:14.232 [2024-11-19T17:07:15.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.232 Nvme0n1 : 1.00 24988.00 97.61 0.00 0.00 0.00 0.00 0.00 00:08:14.232 [2024-11-19T17:07:15.703Z] =================================================================================================================== 00:08:14.232 [2024-11-19T17:07:15.703Z] Total : 24988.00 97.61 0.00 0.00 0.00 0.00 0.00 00:08:14.232 00:08:15.175 18:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e2f7912d-dac0-4960-84bc-fb398410cfc3 00:08:15.436 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.436 Nvme0n1 : 2.00 25152.50 98.25 0.00 0.00 0.00 0.00 0.00 00:08:15.436 [2024-11-19T17:07:16.907Z] =================================================================================================================== 00:08:15.436 [2024-11-19T17:07:16.907Z] Total : 25152.50 98.25 0.00 0.00 0.00 0.00 0.00 00:08:15.436 00:08:15.436 true 00:08:15.436 18:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2f7912d-dac0-4960-84bc-fb398410cfc3 00:08:15.436 18:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:15.697 18:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:15.697 18:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:15.697 18:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1808435 00:08:16.268 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.268 Nvme0n1 : 3.00 25211.67 98.48 0.00 0.00 0.00 0.00 0.00 00:08:16.268 [2024-11-19T17:07:17.739Z] =================================================================================================================== 00:08:16.268 [2024-11-19T17:07:17.739Z] Total : 25211.67 98.48 0.00 0.00 0.00 0.00 0.00 00:08:16.268 00:08:17.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.209 Nvme0n1 : 4.00 25259.50 98.67 0.00 0.00 0.00 0.00 0.00 00:08:17.209 [2024-11-19T17:07:18.680Z] =================================================================================================================== 00:08:17.209 [2024-11-19T17:07:18.680Z] Total : 25259.50 98.67 0.00 0.00 0.00 0.00 0.00 00:08:17.209 00:08:18.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.616 Nvme0n1 : 5.00 25295.80 98.81 0.00 0.00 0.00 0.00 0.00 00:08:18.616 [2024-11-19T17:07:20.087Z] =================================================================================================================== 00:08:18.616 [2024-11-19T17:07:20.087Z] Total : 25295.80 98.81 0.00 0.00 0.00 0.00 0.00 00:08:18.616 00:08:19.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.557 Nvme0n1 : 6.00 25321.17 98.91 0.00 0.00 0.00 0.00 0.00 00:08:19.557 [2024-11-19T17:07:21.028Z] =================================================================================================================== 00:08:19.557 
[2024-11-19T17:07:21.028Z] Total : 25321.17 98.91 0.00 0.00 0.00 0.00 0.00 00:08:19.557 00:08:20.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.495 Nvme0n1 : 7.00 25341.14 98.99 0.00 0.00 0.00 0.00 0.00 00:08:20.495 [2024-11-19T17:07:21.966Z] =================================================================================================================== 00:08:20.495 [2024-11-19T17:07:21.966Z] Total : 25341.14 98.99 0.00 0.00 0.00 0.00 0.00 00:08:20.495 00:08:21.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.434 Nvme0n1 : 8.00 25355.25 99.04 0.00 0.00 0.00 0.00 0.00 00:08:21.434 [2024-11-19T17:07:22.905Z] =================================================================================================================== 00:08:21.434 [2024-11-19T17:07:22.905Z] Total : 25355.25 99.04 0.00 0.00 0.00 0.00 0.00 00:08:21.434 00:08:22.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.374 Nvme0n1 : 9.00 25373.11 99.11 0.00 0.00 0.00 0.00 0.00 00:08:22.374 [2024-11-19T17:07:23.845Z] =================================================================================================================== 00:08:22.374 [2024-11-19T17:07:23.845Z] Total : 25373.11 99.11 0.00 0.00 0.00 0.00 0.00 00:08:22.374 00:08:23.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.322 Nvme0n1 : 10.00 25384.90 99.16 0.00 0.00 0.00 0.00 0.00 00:08:23.322 [2024-11-19T17:07:24.793Z] =================================================================================================================== 00:08:23.322 [2024-11-19T17:07:24.793Z] Total : 25384.90 99.16 0.00 0.00 0.00 0.00 0.00 00:08:23.322 00:08:23.322 00:08:23.322 Latency(us) 00:08:23.322 [2024-11-19T17:07:24.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:23.322 Nvme0n1 : 10.00 25387.35 99.17 0.00 0.00 5039.05 3017.39 11960.32 00:08:23.322 [2024-11-19T17:07:24.793Z] =================================================================================================================== 00:08:23.322 [2024-11-19T17:07:24.793Z] Total : 25387.35 99.17 0.00 0.00 5039.05 3017.39 11960.32 00:08:23.322 { 00:08:23.322 "results": [ 00:08:23.322 { 00:08:23.322 "job": "Nvme0n1", 00:08:23.322 "core_mask": "0x2", 00:08:23.322 "workload": "randwrite", 00:08:23.322 "status": "finished", 00:08:23.322 "queue_depth": 128, 00:08:23.322 "io_size": 4096, 00:08:23.322 "runtime": 10.004078, 00:08:23.322 "iops": 25387.34703987714, 00:08:23.322 "mibps": 99.16932437452007, 00:08:23.322 "io_failed": 0, 00:08:23.322 "io_timeout": 0, 00:08:23.322 "avg_latency_us": 5039.051221593556, 00:08:23.322 "min_latency_us": 3017.3866666666668, 00:08:23.322 "max_latency_us": 11960.32 00:08:23.322 } 00:08:23.322 ], 00:08:23.322 "core_count": 1 00:08:23.322 } 00:08:23.322 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1808308 00:08:23.322 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1808308 ']' 00:08:23.322 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1808308 00:08:23.322 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:23.322 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.322 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1808308 00:08:23.322 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:23.322 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:23.322 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1808308' 00:08:23.322 killing process with pid 1808308 00:08:23.322 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1808308 00:08:23.322 Received shutdown signal, test time was about 10.000000 seconds 00:08:23.322 00:08:23.322 Latency(us) 00:08:23.322 [2024-11-19T17:07:24.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.322 [2024-11-19T17:07:24.793Z] =================================================================================================================== 00:08:23.322 [2024-11-19T17:07:24.793Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:23.322 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1808308 00:08:23.582 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:23.582 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:23.843 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2f7912d-dac0-4960-84bc-fb398410cfc3 00:08:23.843 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:24.103 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:24.103 18:07:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:24.103 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1804504 00:08:24.103 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1804504 00:08:24.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1804504 Killed "${NVMF_APP[@]}" "$@" 00:08:24.103 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:24.103 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:24.103 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:24.103 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:24.103 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:24.103 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1810669 00:08:24.103 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1810669 00:08:24.103 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1810669 ']' 00:08:24.103 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.103 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:24.103 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.103 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.103 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.103 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:24.103 [2024-11-19 18:07:25.504367] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:08:24.104 [2024-11-19 18:07:25.504425] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.364 [2024-11-19 18:07:25.592927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.364 [2024-11-19 18:07:25.623020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.364 [2024-11-19 18:07:25.623046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.364 [2024-11-19 18:07:25.623052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.364 [2024-11-19 18:07:25.623057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.364 [2024-11-19 18:07:25.623062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:24.364 [2024-11-19 18:07:25.623495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.935 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.935 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:24.935 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:24.935 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:24.935 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:24.935 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.935 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:25.196 [2024-11-19 18:07:26.488650] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:25.196 [2024-11-19 18:07:26.488724] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:25.196 [2024-11-19 18:07:26.488746] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:25.196 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:25.196 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fca61504-003c-4fbe-bad6-56edb1e93ae4 00:08:25.196 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=fca61504-003c-4fbe-bad6-56edb1e93ae4 
00:08:25.196 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:25.196 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:25.196 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:25.196 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:25.196 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:25.457 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fca61504-003c-4fbe-bad6-56edb1e93ae4 -t 2000 00:08:25.457 [ 00:08:25.457 { 00:08:25.457 "name": "fca61504-003c-4fbe-bad6-56edb1e93ae4", 00:08:25.457 "aliases": [ 00:08:25.457 "lvs/lvol" 00:08:25.457 ], 00:08:25.457 "product_name": "Logical Volume", 00:08:25.457 "block_size": 4096, 00:08:25.458 "num_blocks": 38912, 00:08:25.458 "uuid": "fca61504-003c-4fbe-bad6-56edb1e93ae4", 00:08:25.458 "assigned_rate_limits": { 00:08:25.458 "rw_ios_per_sec": 0, 00:08:25.458 "rw_mbytes_per_sec": 0, 00:08:25.458 "r_mbytes_per_sec": 0, 00:08:25.458 "w_mbytes_per_sec": 0 00:08:25.458 }, 00:08:25.458 "claimed": false, 00:08:25.458 "zoned": false, 00:08:25.458 "supported_io_types": { 00:08:25.458 "read": true, 00:08:25.458 "write": true, 00:08:25.458 "unmap": true, 00:08:25.458 "flush": false, 00:08:25.458 "reset": true, 00:08:25.458 "nvme_admin": false, 00:08:25.458 "nvme_io": false, 00:08:25.458 "nvme_io_md": false, 00:08:25.458 "write_zeroes": true, 00:08:25.458 "zcopy": false, 00:08:25.458 "get_zone_info": false, 00:08:25.458 "zone_management": false, 00:08:25.458 "zone_append": 
false, 00:08:25.458 "compare": false, 00:08:25.458 "compare_and_write": false, 00:08:25.458 "abort": false, 00:08:25.458 "seek_hole": true, 00:08:25.458 "seek_data": true, 00:08:25.458 "copy": false, 00:08:25.458 "nvme_iov_md": false 00:08:25.458 }, 00:08:25.458 "driver_specific": { 00:08:25.458 "lvol": { 00:08:25.458 "lvol_store_uuid": "e2f7912d-dac0-4960-84bc-fb398410cfc3", 00:08:25.458 "base_bdev": "aio_bdev", 00:08:25.458 "thin_provision": false, 00:08:25.458 "num_allocated_clusters": 38, 00:08:25.458 "snapshot": false, 00:08:25.458 "clone": false, 00:08:25.458 "esnap_clone": false 00:08:25.458 } 00:08:25.458 } 00:08:25.458 } 00:08:25.458 ] 00:08:25.458 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:25.458 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2f7912d-dac0-4960-84bc-fb398410cfc3 00:08:25.458 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:25.719 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:25.719 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2f7912d-dac0-4960-84bc-fb398410cfc3 00:08:25.719 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:25.719 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:25.719 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:25.979 [2024-11-19 18:07:27.313230] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:25.979 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2f7912d-dac0-4960-84bc-fb398410cfc3 00:08:25.979 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:25.979 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2f7912d-dac0-4960-84bc-fb398410cfc3 00:08:25.979 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.979 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.979 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.979 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.979 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.979 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.979 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.979 18:07:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:25.979 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2f7912d-dac0-4960-84bc-fb398410cfc3 00:08:26.240 request: 00:08:26.240 { 00:08:26.240 "uuid": "e2f7912d-dac0-4960-84bc-fb398410cfc3", 00:08:26.240 "method": "bdev_lvol_get_lvstores", 00:08:26.240 "req_id": 1 00:08:26.240 } 00:08:26.240 Got JSON-RPC error response 00:08:26.240 response: 00:08:26.240 { 00:08:26.240 "code": -19, 00:08:26.240 "message": "No such device" 00:08:26.240 } 00:08:26.240 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:26.240 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:26.240 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:26.240 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:26.240 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.240 aio_bdev 00:08:26.240 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fca61504-003c-4fbe-bad6-56edb1e93ae4 00:08:26.240 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=fca61504-003c-4fbe-bad6-56edb1e93ae4 00:08:26.240 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.240 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:26.240 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.240 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.240 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:26.501 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fca61504-003c-4fbe-bad6-56edb1e93ae4 -t 2000 00:08:26.761 [ 00:08:26.761 { 00:08:26.761 "name": "fca61504-003c-4fbe-bad6-56edb1e93ae4", 00:08:26.761 "aliases": [ 00:08:26.761 "lvs/lvol" 00:08:26.761 ], 00:08:26.761 "product_name": "Logical Volume", 00:08:26.761 "block_size": 4096, 00:08:26.761 "num_blocks": 38912, 00:08:26.761 "uuid": "fca61504-003c-4fbe-bad6-56edb1e93ae4", 00:08:26.762 "assigned_rate_limits": { 00:08:26.762 "rw_ios_per_sec": 0, 00:08:26.762 "rw_mbytes_per_sec": 0, 00:08:26.762 "r_mbytes_per_sec": 0, 00:08:26.762 "w_mbytes_per_sec": 0 00:08:26.762 }, 00:08:26.762 "claimed": false, 00:08:26.762 "zoned": false, 00:08:26.762 "supported_io_types": { 00:08:26.762 "read": true, 00:08:26.762 "write": true, 00:08:26.762 "unmap": true, 00:08:26.762 "flush": false, 00:08:26.762 "reset": true, 00:08:26.762 "nvme_admin": false, 00:08:26.762 "nvme_io": false, 00:08:26.762 "nvme_io_md": false, 00:08:26.762 "write_zeroes": true, 00:08:26.762 "zcopy": false, 00:08:26.762 "get_zone_info": false, 00:08:26.762 "zone_management": false, 00:08:26.762 "zone_append": false, 00:08:26.762 "compare": false, 00:08:26.762 "compare_and_write": false, 
00:08:26.762 "abort": false, 00:08:26.762 "seek_hole": true, 00:08:26.762 "seek_data": true, 00:08:26.762 "copy": false, 00:08:26.762 "nvme_iov_md": false 00:08:26.762 }, 00:08:26.762 "driver_specific": { 00:08:26.762 "lvol": { 00:08:26.762 "lvol_store_uuid": "e2f7912d-dac0-4960-84bc-fb398410cfc3", 00:08:26.762 "base_bdev": "aio_bdev", 00:08:26.762 "thin_provision": false, 00:08:26.762 "num_allocated_clusters": 38, 00:08:26.762 "snapshot": false, 00:08:26.762 "clone": false, 00:08:26.762 "esnap_clone": false 00:08:26.762 } 00:08:26.762 } 00:08:26.762 } 00:08:26.762 ] 00:08:26.762 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:26.762 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2f7912d-dac0-4960-84bc-fb398410cfc3 00:08:26.762 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:27.023 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:27.023 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2f7912d-dac0-4960-84bc-fb398410cfc3 00:08:27.023 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:27.023 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:27.023 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fca61504-003c-4fbe-bad6-56edb1e93ae4 00:08:27.284 18:07:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e2f7912d-dac0-4960-84bc-fb398410cfc3 00:08:27.545 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:27.546 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:27.546 00:08:27.546 real 0m16.873s 00:08:27.546 user 0m44.583s 00:08:27.546 sys 0m3.003s 00:08:27.546 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.546 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:27.546 ************************************ 00:08:27.546 END TEST lvs_grow_dirty 00:08:27.546 ************************************ 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:27.807 nvmf_trace.0 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:27.807 rmmod nvme_tcp 00:08:27.807 rmmod nvme_fabrics 00:08:27.807 rmmod nvme_keyring 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1810669 ']' 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1810669 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1810669 ']' 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1810669 
00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1810669 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1810669' 00:08:27.807 killing process with pid 1810669 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1810669 00:08:27.807 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1810669 00:08:28.068 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:28.068 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:28.068 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:28.068 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:28.068 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:28.068 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:28.068 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:28.068 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:28.068 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:08:28.068 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.068 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.068 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.981 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:29.981 00:08:29.981 real 0m44.239s 00:08:29.981 user 1m6.682s 00:08:29.981 sys 0m10.572s 00:08:29.981 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.981 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.981 ************************************ 00:08:29.981 END TEST nvmf_lvs_grow 00:08:29.981 ************************************ 00:08:29.981 18:07:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:29.981 18:07:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:29.981 18:07:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.981 18:07:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.243 ************************************ 00:08:30.243 START TEST nvmf_bdev_io_wait 00:08:30.243 ************************************ 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:30.243 * Looking for test storage... 
00:08:30.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:30.243 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.243 --rc genhtml_branch_coverage=1 00:08:30.243 --rc genhtml_function_coverage=1 00:08:30.243 --rc genhtml_legend=1 00:08:30.243 --rc geninfo_all_blocks=1 00:08:30.243 --rc geninfo_unexecuted_blocks=1 00:08:30.243 00:08:30.243 ' 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:30.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.243 --rc genhtml_branch_coverage=1 00:08:30.243 --rc genhtml_function_coverage=1 00:08:30.243 --rc genhtml_legend=1 00:08:30.243 --rc geninfo_all_blocks=1 00:08:30.243 --rc geninfo_unexecuted_blocks=1 00:08:30.243 00:08:30.243 ' 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:30.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.243 --rc genhtml_branch_coverage=1 00:08:30.243 --rc genhtml_function_coverage=1 00:08:30.243 --rc genhtml_legend=1 00:08:30.243 --rc geninfo_all_blocks=1 00:08:30.243 --rc geninfo_unexecuted_blocks=1 00:08:30.243 00:08:30.243 ' 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:30.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.243 --rc genhtml_branch_coverage=1 00:08:30.243 --rc genhtml_function_coverage=1 00:08:30.243 --rc genhtml_legend=1 00:08:30.243 --rc geninfo_all_blocks=1 00:08:30.243 --rc geninfo_unexecuted_blocks=1 00:08:30.243 00:08:30.243 ' 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.243 18:07:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.243 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.244 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.244 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.244 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.244 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.244 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.244 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:30.244 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.244 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:30.244 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.244 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.244 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.244 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:30.244 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:30.506 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:38.671 18:07:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:38.671 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:38.671 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.671 18:07:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:38.671 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.671 
18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:38.671 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.671 18:07:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:38.671 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.671 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.671 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:38.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:08:38.672 00:08:38.672 --- 10.0.0.2 ping statistics --- 00:08:38.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.672 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:08:38.672 00:08:38.672 --- 10.0.0.1 ping statistics --- 00:08:38.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.672 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1815765 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 1815765 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1815765 ']' 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.672 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.672 [2024-11-19 18:07:39.237972] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:08:38.672 [2024-11-19 18:07:39.238039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.672 [2024-11-19 18:07:39.336944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.672 [2024-11-19 18:07:39.390947] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.672 [2024-11-19 18:07:39.390998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:38.672 [2024-11-19 18:07:39.391007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.672 [2024-11-19 18:07:39.391014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.672 [2024-11-19 18:07:39.391021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.672 [2024-11-19 18:07:39.393443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.672 [2024-11-19 18:07:39.393604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.672 [2024-11-19 18:07:39.393738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.672 [2024-11-19 18:07:39.393739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.672 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.672 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:38.672 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:38.672 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:38.672 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.672 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.672 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:38.672 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.672 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.672 18:07:40 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.672 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:38.672 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.672 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.958 [2024-11-19 18:07:40.194312] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.958 Malloc0 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.958 
18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.958 [2024-11-19 18:07:40.259922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1815821 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1815824 
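The `nvmf_tcp_init` records earlier in the trace build a loopback topology over two physical ports by moving the target-side port into its own network namespace, so the initiator (`10.0.0.1` on `cvl_0_1`) talks TCP to the target (`10.0.0.2` on `cvl_0_0` inside `cvl_0_0_ns_spdk`). A dry-run sketch of that sequence, echoed rather than executed because the real commands need root and the interface names are specific to the log's host:

```shell
#!/usr/bin/env bash
# Echo each command instead of running it; replace run() with sudo "$@"
# on a host that actually has these ports.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target-side port (names as seen in the log)
INI_IF=cvl_0_1   # initiator-side port

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                      # isolate target port
run ip addr add 10.0.0.1/24 dev "$INI_IF"                  # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                     # sanity check, as in the log
```

The namespace is why the target app is launched as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`: the two ends then traverse the real NIC ports instead of the kernel loopback.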
00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:38.958 { 00:08:38.958 "params": { 00:08:38.958 "name": "Nvme$subsystem", 00:08:38.958 "trtype": "$TEST_TRANSPORT", 00:08:38.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:38.958 "adrfam": "ipv4", 00:08:38.958 "trsvcid": "$NVMF_PORT", 00:08:38.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:38.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:38.958 "hdgst": ${hdgst:-false}, 00:08:38.958 "ddgst": ${ddgst:-false} 00:08:38.958 }, 00:08:38.958 "method": "bdev_nvme_attach_controller" 00:08:38.958 } 00:08:38.958 EOF 00:08:38.958 )") 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1815826 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:38.958 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:38.959 { 00:08:38.959 "params": { 00:08:38.959 
"name": "Nvme$subsystem", 00:08:38.959 "trtype": "$TEST_TRANSPORT", 00:08:38.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:38.959 "adrfam": "ipv4", 00:08:38.959 "trsvcid": "$NVMF_PORT", 00:08:38.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:38.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:38.959 "hdgst": ${hdgst:-false}, 00:08:38.959 "ddgst": ${ddgst:-false} 00:08:38.959 }, 00:08:38.959 "method": "bdev_nvme_attach_controller" 00:08:38.959 } 00:08:38.959 EOF 00:08:38.959 )") 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1815830 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:38.959 { 00:08:38.959 "params": { 00:08:38.959 "name": "Nvme$subsystem", 00:08:38.959 "trtype": "$TEST_TRANSPORT", 00:08:38.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:38.959 "adrfam": "ipv4", 00:08:38.959 "trsvcid": "$NVMF_PORT", 00:08:38.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:38.959 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:38.959 "hdgst": ${hdgst:-false}, 00:08:38.959 "ddgst": ${ddgst:-false} 00:08:38.959 }, 00:08:38.959 "method": "bdev_nvme_attach_controller" 00:08:38.959 } 00:08:38.959 EOF 00:08:38.959 )") 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:38.959 { 00:08:38.959 "params": { 00:08:38.959 "name": "Nvme$subsystem", 00:08:38.959 "trtype": "$TEST_TRANSPORT", 00:08:38.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:38.959 "adrfam": "ipv4", 00:08:38.959 "trsvcid": "$NVMF_PORT", 00:08:38.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:38.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:38.959 "hdgst": ${hdgst:-false}, 00:08:38.959 "ddgst": ${ddgst:-false} 00:08:38.959 }, 00:08:38.959 "method": "bdev_nvme_attach_controller" 00:08:38.959 } 00:08:38.959 EOF 00:08:38.959 )") 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1815821 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:38.959 "params": { 00:08:38.959 "name": "Nvme1", 00:08:38.959 "trtype": "tcp", 00:08:38.959 "traddr": "10.0.0.2", 00:08:38.959 "adrfam": "ipv4", 00:08:38.959 "trsvcid": "4420", 00:08:38.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:38.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:38.959 "hdgst": false, 00:08:38.959 "ddgst": false 00:08:38.959 }, 00:08:38.959 "method": "bdev_nvme_attach_controller" 00:08:38.959 }' 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
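The heredoc/`jq`/`printf` records around here are `gen_nvmf_target_json` assembling a bdev config that each `bdevperf` instance reads via `--json /dev/fd/63`, i.e. process substitution. A sketch of the same pattern; the outer `subsystems`/`config` wrapper is an assumption sketched from SPDK's JSON-config format (the log only shows the inner `bdev_nvme_attach_controller` entries), and the parameter values simply mirror the log:

```shell
#!/usr/bin/env bash
# Generate the connection config in memory; no temp file needed.
gen_json() {
  cat <<EOF
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1"
      }
    } ]
  } ]
}
EOF
}

# Shape of the real invocation (needs a running target, so commented out):
#   bdevperf --json <(gen_json) -q 128 -o 4096 -w write -t 1 -s 256
gen_json
```

Running four such instances in parallel (write/read/flush/unmap, distinct core masks and `-i` instance IDs) and `wait`-ing on their PIDs is exactly what the `WRITE_PID`/`READ_PID`/`FLUSH_PID`/`UNMAP_PID` records above do.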
00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:38.959 "params": { 00:08:38.959 "name": "Nvme1", 00:08:38.959 "trtype": "tcp", 00:08:38.959 "traddr": "10.0.0.2", 00:08:38.959 "adrfam": "ipv4", 00:08:38.959 "trsvcid": "4420", 00:08:38.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:38.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:38.959 "hdgst": false, 00:08:38.959 "ddgst": false 00:08:38.959 }, 00:08:38.959 "method": "bdev_nvme_attach_controller" 00:08:38.959 }' 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:38.959 "params": { 00:08:38.959 "name": "Nvme1", 00:08:38.959 "trtype": "tcp", 00:08:38.959 "traddr": "10.0.0.2", 00:08:38.959 "adrfam": "ipv4", 00:08:38.959 "trsvcid": "4420", 00:08:38.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:38.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:38.959 "hdgst": false, 00:08:38.959 "ddgst": false 00:08:38.959 }, 00:08:38.959 "method": "bdev_nvme_attach_controller" 00:08:38.959 }' 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:38.959 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:38.959 "params": { 00:08:38.959 "name": "Nvme1", 00:08:38.959 "trtype": "tcp", 00:08:38.959 "traddr": "10.0.0.2", 00:08:38.959 "adrfam": "ipv4", 00:08:38.959 "trsvcid": "4420", 00:08:38.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:38.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:38.959 "hdgst": false, 00:08:38.959 "ddgst": false 00:08:38.959 }, 00:08:38.959 "method": "bdev_nvme_attach_controller" 00:08:38.959 }' 00:08:38.959 [2024-11-19 18:07:40.321330] Starting SPDK v25.01-pre git sha1 
8d982eda9 / DPDK 24.03.0 initialization... 00:08:38.959 [2024-11-19 18:07:40.321402] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:38.959 [2024-11-19 18:07:40.323392] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:08:38.959 [2024-11-19 18:07:40.323406] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:08:38.959 [2024-11-19 18:07:40.323466] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:38.959 [2024-11-19 18:07:40.323474] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:38.959 [2024-11-19 18:07:40.323510] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:08:38.959 [2024-11-19 18:07:40.323572] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:39.246 [2024-11-19 18:07:40.547760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.246 [2024-11-19 18:07:40.589142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:39.246 [2024-11-19 18:07:40.640592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.246 [2024-11-19 18:07:40.679832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:39.533 [2024-11-19 18:07:40.705537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.533 [2024-11-19 18:07:40.744670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:39.533 [2024-11-19 18:07:40.774545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.533 [2024-11-19 18:07:40.812497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:39.533 Running I/O for 1 seconds... 00:08:39.533 Running I/O for 1 seconds... 00:08:39.811 Running I/O for 1 seconds... 00:08:39.811 Running I/O for 1 seconds... 
00:08:40.486 10517.00 IOPS, 41.08 MiB/s
00:08:40.486 Latency(us)
00:08:40.486 [2024-11-19T17:07:41.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:40.486 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:08:40.486 Nvme1n1 : 1.01 10561.33 41.26 0.00 0.00 12072.62 6389.76 17913.17
00:08:40.486 [2024-11-19T17:07:41.957Z] ===================================================================================================================
00:08:40.486 [2024-11-19T17:07:41.957Z] Total : 10561.33 41.26 0.00 0.00 12072.62 6389.76 17913.17
00:08:40.771 9050.00 IOPS, 35.35 MiB/s
00:08:40.771 Latency(us)
00:08:40.771 [2024-11-19T17:07:42.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:40.771 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:08:40.771 Nvme1n1 : 1.01 9116.39 35.61 0.00 0.00 13982.58 6171.31 22282.24
00:08:40.771 [2024-11-19T17:07:42.242Z] ===================================================================================================================
00:08:40.771 [2024-11-19T17:07:42.242Z] Total : 9116.39 35.61 0.00 0.00 13982.58 6171.31 22282.24
00:08:40.771 10730.00 IOPS, 41.91 MiB/s
00:08:40.771 Latency(us)
00:08:40.771 [2024-11-19T17:07:42.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:40.771 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:08:40.771 Nvme1n1 : 1.01 10811.79 42.23 0.00 0.00 11799.30 4751.36 24029.87
00:08:40.771 [2024-11-19T17:07:42.242Z] ===================================================================================================================
00:08:40.771 [2024-11-19T17:07:42.242Z] Total : 10811.79 42.23 0.00 0.00 11799.30 4751.36 24029.87
00:08:40.771 187848.00 IOPS, 733.78 MiB/s
00:08:40.771 Latency(us)
00:08:40.771 [2024-11-19T17:07:42.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:40.771 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:08:40.771 Nvme1n1 : 1.00 187476.55 732.33 0.00 0.00 678.62 314.03 1979.73
00:08:40.771 [2024-11-19T17:07:42.242Z] ===================================================================================================================
00:08:40.771 [2024-11-19T17:07:42.242Z] Total : 187476.55 732.33 0.00 0.00 678.62 314.03 1979.73
00:08:40.771 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1815824
00:08:40.771 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1815826
00:08:40.771 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1815830
00:08:40.771 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:40.771 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.771 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:40.771 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.771 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:08:40.771 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:08:40.771 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:40.771 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:08:40.771 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:40.771 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:08:40.771 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:40.771 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:40.771 rmmod nvme_tcp
00:08:40.771 rmmod nvme_fabrics
00:08:41.032 rmmod nvme_keyring
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1815765 ']'
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1815765
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1815765 ']'
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1815765
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1815765
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1815765'
00:08:41.032 killing process with pid 1815765
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1815765
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1815765
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:41.032 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:43.577
00:08:43.577 real 0m13.103s
00:08:43.577 user 0m19.840s
00:08:43.577 sys 0m7.523s
00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:43.577 ************************************
00:08:43.577 END TEST nvmf_bdev_io_wait 00:08:43.577 ************************************ 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:43.577 ************************************ 00:08:43.577 START TEST nvmf_queue_depth 00:08:43.577 ************************************ 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:43.577 * Looking for test storage... 00:08:43.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:43.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.577 --rc genhtml_branch_coverage=1 00:08:43.577 --rc genhtml_function_coverage=1 00:08:43.577 --rc genhtml_legend=1 00:08:43.577 --rc geninfo_all_blocks=1 00:08:43.577 --rc 
geninfo_unexecuted_blocks=1 00:08:43.577 00:08:43.577 ' 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:43.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.577 --rc genhtml_branch_coverage=1 00:08:43.577 --rc genhtml_function_coverage=1 00:08:43.577 --rc genhtml_legend=1 00:08:43.577 --rc geninfo_all_blocks=1 00:08:43.577 --rc geninfo_unexecuted_blocks=1 00:08:43.577 00:08:43.577 ' 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:43.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.577 --rc genhtml_branch_coverage=1 00:08:43.577 --rc genhtml_function_coverage=1 00:08:43.577 --rc genhtml_legend=1 00:08:43.577 --rc geninfo_all_blocks=1 00:08:43.577 --rc geninfo_unexecuted_blocks=1 00:08:43.577 00:08:43.577 ' 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:43.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.577 --rc genhtml_branch_coverage=1 00:08:43.577 --rc genhtml_function_coverage=1 00:08:43.577 --rc genhtml_legend=1 00:08:43.577 --rc geninfo_all_blocks=1 00:08:43.577 --rc geninfo_unexecuted_blocks=1 00:08:43.577 00:08:43.577 ' 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.577 18:07:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.577 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.578 18:07:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:43.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.578 18:07:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:43.578 18:07:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:51.717 18:07:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:51.717 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:51.718 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:51.718 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:51.718 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:51.718 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.718 
18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:51.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:08:51.718 00:08:51.718 --- 10.0.0.2 ping statistics --- 00:08:51.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.718 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
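The interface and namespace plumbing traced above (nvmf/common.sh's nvmf_tcp_init) condenses to a handful of iproute2 and iptables commands; the following dry-run sketch restates them in order. Interface names, IPs, and the port-4420 rule are taken from the log, and run() only prints each command, so the sketch executes without root or real NICs.

```shell
#!/usr/bin/env sh
# Dry-run sketch of the nvmf_tcp_init sequence traced in the log above.
# run() echoes instead of executing, so this is safe to run anywhere.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0       # moved into the namespace, addressed 10.0.0.2
INITIATOR_IF=cvl_0_1    # stays in the root namespace, addressed 10.0.0.1

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The two ping checks at the end mirror the log: one from the root namespace to the target IP, one from inside the namespace back to the initiator, confirming both directions before the target app starts.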
00:08:51.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:08:51.718 00:08:51.718 --- 10.0.0.1 ping statistics --- 00:08:51.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.718 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1820519 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
1820519 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1820519 ']' 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.718 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.718 [2024-11-19 18:07:52.450146] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:08:51.718 [2024-11-19 18:07:52.450223] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.718 [2024-11-19 18:07:52.551075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.718 [2024-11-19 18:07:52.601563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.719 [2024-11-19 18:07:52.601613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:51.719 [2024-11-19 18:07:52.601623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.719 [2024-11-19 18:07:52.601631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.719 [2024-11-19 18:07:52.601638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.719 [2024-11-19 18:07:52.602395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.980 [2024-11-19 18:07:53.312407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
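The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above amounts to polling until the RPC socket appears. A minimal sketch, assuming a plain existence check (the real waitforlisten in autotest_common.sh also probes the RPC server over the socket; max_retries=100 comes from the trace above):

```shell
# Poll until $1 exists as a unix socket, for up to $2 attempts
# (default 100, mirroring max_retries=100 in the log).
# Returns non-zero if the socket never appears.
wait_for_listen() {
    sock=$1
    retries=${2:-100}
    while [ "$retries" -gt 0 ]; do
        [ -S "$sock" ] && return 0
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1
}
```

Checking `-S` alone only proves the listener was created; the actual helper follows up with an RPC call to confirm the target answers before the test proceeds.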
00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.980 Malloc0 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.980 [2024-11-19 18:07:53.373838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.980 18:07:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1820868 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1820868 /var/tmp/bdevperf.sock 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1820868 ']' 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:51.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.980 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.980 [2024-11-19 18:07:53.432798] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:08:51.980 [2024-11-19 18:07:53.432866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1820868 ] 00:08:52.240 [2024-11-19 18:07:53.524048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.240 [2024-11-19 18:07:53.576516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.812 18:07:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.812 18:07:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:52.812 18:07:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:52.812 18:07:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.812 18:07:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:53.072 NVMe0n1 00:08:53.072 18:07:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.072 18:07:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:53.332 Running I/O for 10 seconds... 
00:08:55.214 11264.00 IOPS, 44.00 MiB/s [2024-11-19T17:07:57.628Z] 11273.50 IOPS, 44.04 MiB/s [2024-11-19T17:07:59.012Z] 11460.67 IOPS, 44.77 MiB/s [2024-11-19T17:07:59.953Z] 11520.25 IOPS, 45.00 MiB/s [2024-11-19T17:08:00.895Z] 11793.20 IOPS, 46.07 MiB/s [2024-11-19T17:08:01.839Z] 11985.50 IOPS, 46.82 MiB/s [2024-11-19T17:08:02.781Z] 12219.43 IOPS, 47.73 MiB/s [2024-11-19T17:08:03.725Z] 12389.00 IOPS, 48.39 MiB/s [2024-11-19T17:08:04.670Z] 12501.67 IOPS, 48.83 MiB/s [2024-11-19T17:08:04.930Z] 12600.10 IOPS, 49.22 MiB/s 00:09:03.459 Latency(us) 00:09:03.459 [2024-11-19T17:08:04.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.459 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:03.459 Verification LBA range: start 0x0 length 0x4000 00:09:03.459 NVMe0n1 : 10.09 12584.91 49.16 0.00 0.00 80786.85 24248.32 67720.53 00:09:03.459 [2024-11-19T17:08:04.930Z] =================================================================================================================== 00:09:03.459 [2024-11-19T17:08:04.930Z] Total : 12584.91 49.16 0.00 0.00 80786.85 24248.32 67720.53 00:09:03.459 { 00:09:03.459 "results": [ 00:09:03.459 { 00:09:03.459 "job": "NVMe0n1", 00:09:03.459 "core_mask": "0x1", 00:09:03.459 "workload": "verify", 00:09:03.459 "status": "finished", 00:09:03.459 "verify_range": { 00:09:03.459 "start": 0, 00:09:03.459 "length": 16384 00:09:03.459 }, 00:09:03.459 "queue_depth": 1024, 00:09:03.459 "io_size": 4096, 00:09:03.459 "runtime": 10.09105, 00:09:03.459 "iops": 12584.91435479955, 00:09:03.459 "mibps": 49.15982169843574, 00:09:03.459 "io_failed": 0, 00:09:03.459 "io_timeout": 0, 00:09:03.459 "avg_latency_us": 80786.84525805477, 00:09:03.459 "min_latency_us": 24248.32, 00:09:03.459 "max_latency_us": 67720.53333333334 00:09:03.459 } 00:09:03.459 ], 00:09:03.459 "core_count": 1 00:09:03.459 } 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
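The summary numbers in the results JSON above are internally consistent: mibps is just iops multiplied by the 4096-byte io_size and converted to MiB. A quick cross-check:

```shell
# iops (12584.91435479955) and io_size (4096) come from the results
# JSON above; IOPS * bytes-per-IO / 2^20 gives MiB/s.
mibps=$(awk 'BEGIN { printf "%.2f", 12584.91435479955 * 4096 / (1024 * 1024) }')
echo "$mibps"   # matches the 49.16 MiB/s reported in the summary table
```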
killprocess 1820868 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1820868 ']' 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1820868 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1820868 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1820868' 00:09:03.459 killing process with pid 1820868 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1820868 00:09:03.459 Received shutdown signal, test time was about 10.000000 seconds 00:09:03.459 00:09:03.459 Latency(us) 00:09:03.459 [2024-11-19T17:08:04.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.459 [2024-11-19T17:08:04.930Z] =================================================================================================================== 00:09:03.459 [2024-11-19T17:08:04.930Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1820868 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:03.459 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:03.459 rmmod nvme_tcp 00:09:03.459 rmmod nvme_fabrics 00:09:03.721 rmmod nvme_keyring 00:09:03.721 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:03.721 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:03.721 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:03.721 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1820519 ']' 00:09:03.721 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1820519 00:09:03.721 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1820519 ']' 00:09:03.721 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1820519 00:09:03.721 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:03.721 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.721 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1820519 00:09:03.721 18:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:03.721 18:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:03.721 18:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1820519' 00:09:03.721 killing process with pid 1820519 00:09:03.721 18:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1820519 00:09:03.721 18:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1820519 00:09:03.721 18:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:03.721 18:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:03.721 18:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:03.721 18:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:03.721 18:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:03.721 18:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:03.721 18:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:03.721 18:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:03.721 18:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:03.721 18:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.721 18:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.721 18:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.261 18:08:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:06.261 00:09:06.261 real 0m22.552s 00:09:06.261 user 0m26.114s 00:09:06.261 sys 0m6.889s 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.261 ************************************ 00:09:06.261 END TEST nvmf_queue_depth 00:09:06.261 ************************************ 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:06.261 ************************************ 00:09:06.261 START TEST nvmf_target_multipath 00:09:06.261 ************************************ 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:06.261 * Looking for test storage... 
00:09:06.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:06.261 18:08:07 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.261 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
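The long xtrace run above is scripts/common.sh's cmp_versions deciding whether the detected lcov (1.15) predates version 2, by splitting each version on dots and comparing field by field. The same check can be sketched with `sort -V` in place of the field-by-field loop (a simplification, not the actual cmp_versions code):

```shell
# lt A B: succeed when version string A sorts strictly before B
# under GNU version ordering (sort -V), e.g. lt 1.15 2 succeeds.
lt() {
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
}

lt 1.15 2 && echo "1.15 < 2"
```

Note the reason a plain lexical comparison will not do: "1.15" sorts after "2" as text, which is exactly the trap the field-wise (or version-aware) comparison avoids.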
00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:06.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.262 --rc genhtml_branch_coverage=1 00:09:06.262 --rc genhtml_function_coverage=1 00:09:06.262 --rc genhtml_legend=1 00:09:06.262 --rc geninfo_all_blocks=1 00:09:06.262 --rc geninfo_unexecuted_blocks=1 00:09:06.262 00:09:06.262 ' 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:06.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.262 --rc genhtml_branch_coverage=1 00:09:06.262 --rc genhtml_function_coverage=1 00:09:06.262 --rc genhtml_legend=1 00:09:06.262 --rc geninfo_all_blocks=1 00:09:06.262 --rc geninfo_unexecuted_blocks=1 00:09:06.262 00:09:06.262 ' 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:06.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.262 --rc genhtml_branch_coverage=1 00:09:06.262 --rc genhtml_function_coverage=1 00:09:06.262 --rc genhtml_legend=1 00:09:06.262 --rc geninfo_all_blocks=1 00:09:06.262 --rc geninfo_unexecuted_blocks=1 00:09:06.262 00:09:06.262 ' 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:06.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.262 --rc genhtml_branch_coverage=1 00:09:06.262 --rc genhtml_function_coverage=1 00:09:06.262 --rc genhtml_legend=1 00:09:06.262 --rc geninfo_all_blocks=1 00:09:06.262 --rc geninfo_unexecuted_blocks=1 00:09:06.262 00:09:06.262 ' 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:06.262 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:14.405 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:14.405 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:14.405 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:14.405 18:08:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:14.405 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.405 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:14.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:09:14.406 00:09:14.406 --- 10.0.0.2 ping statistics --- 00:09:14.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.406 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:14.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:09:14.406 00:09:14.406 --- 10.0.0.1 ping statistics --- 00:09:14.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.406 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:14.406 18:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:14.406 only one NIC for nvmf test 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:14.406 18:08:15 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.406 rmmod nvme_tcp 00:09:14.406 rmmod nvme_fabrics 00:09:14.406 rmmod nvme_keyring 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.406 18:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:15.791 00:09:15.791 real 0m9.915s 00:09:15.791 user 0m2.259s 00:09:15.791 sys 0m5.615s 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:15.791 ************************************ 00:09:15.791 END TEST nvmf_target_multipath 00:09:15.791 ************************************ 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.791 18:08:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.053 ************************************ 00:09:16.053 START TEST nvmf_zcopy 00:09:16.053 ************************************ 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:16.053 * Looking for test storage... 00:09:16.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.053 18:08:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.053 --rc genhtml_branch_coverage=1 00:09:16.053 --rc genhtml_function_coverage=1 00:09:16.053 --rc genhtml_legend=1 00:09:16.053 --rc geninfo_all_blocks=1 00:09:16.053 --rc geninfo_unexecuted_blocks=1 00:09:16.053 00:09:16.053 ' 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.053 --rc genhtml_branch_coverage=1 00:09:16.053 --rc genhtml_function_coverage=1 00:09:16.053 --rc genhtml_legend=1 00:09:16.053 --rc geninfo_all_blocks=1 00:09:16.053 --rc geninfo_unexecuted_blocks=1 00:09:16.053 00:09:16.053 ' 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:16.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.053 --rc genhtml_branch_coverage=1 00:09:16.053 --rc genhtml_function_coverage=1 00:09:16.053 --rc genhtml_legend=1 00:09:16.053 --rc geninfo_all_blocks=1 00:09:16.053 --rc geninfo_unexecuted_blocks=1 00:09:16.053 00:09:16.053 ' 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.053 --rc genhtml_branch_coverage=1 00:09:16.053 --rc 
genhtml_function_coverage=1 00:09:16.053 --rc genhtml_legend=1 00:09:16.053 --rc geninfo_all_blocks=1 00:09:16.053 --rc geninfo_unexecuted_blocks=1 00:09:16.053 00:09:16.053 ' 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.053 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:16.314 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:16.314 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.314 18:08:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.314 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.314 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.314 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.314 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.314 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.314 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.314 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.314 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.314 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.315 18:08:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:16.315 18:08:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:24.453 18:08:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:24.453 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:24.453 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:24.453 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:24.453 18:08:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:24.453 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.453 18:08:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:24.453 18:08:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:24.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:09:24.453 00:09:24.453 --- 10.0.0.2 ping statistics --- 00:09:24.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.453 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:24.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:09:24.453 00:09:24.453 --- 10.0.0.1 ping statistics --- 00:09:24.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.453 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1831555 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1831555 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1831555 ']' 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.453 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.453 [2024-11-19 18:08:25.128088] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:09:24.453 [2024-11-19 18:08:25.128153] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.453 [2024-11-19 18:08:25.226224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.453 [2024-11-19 18:08:25.276477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.453 [2024-11-19 18:08:25.276531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:24.453 [2024-11-19 18:08:25.276539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.453 [2024-11-19 18:08:25.276547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.453 [2024-11-19 18:08:25.276553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.454 [2024-11-19 18:08:25.277366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.713 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.713 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:24.713 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:24.713 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:24.713 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.713 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.713 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:24.713 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:24.713 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.713 18:08:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.713 [2024-11-19 18:08:26.004445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.714 [2024-11-19 18:08:26.028703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.714 malloc0 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:24.714 { 00:09:24.714 "params": { 00:09:24.714 "name": "Nvme$subsystem", 00:09:24.714 "trtype": "$TEST_TRANSPORT", 00:09:24.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.714 "adrfam": "ipv4", 00:09:24.714 "trsvcid": "$NVMF_PORT", 00:09:24.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.714 "hdgst": ${hdgst:-false}, 00:09:24.714 "ddgst": ${ddgst:-false} 00:09:24.714 }, 00:09:24.714 "method": "bdev_nvme_attach_controller" 00:09:24.714 } 00:09:24.714 EOF 00:09:24.714 )") 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:24.714 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:24.714 "params": { 00:09:24.714 "name": "Nvme1", 00:09:24.714 "trtype": "tcp", 00:09:24.714 "traddr": "10.0.0.2", 00:09:24.714 "adrfam": "ipv4", 00:09:24.714 "trsvcid": "4420", 00:09:24.714 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.714 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.714 "hdgst": false, 00:09:24.714 "ddgst": false 00:09:24.714 }, 00:09:24.714 "method": "bdev_nvme_attach_controller" 00:09:24.714 }' 00:09:24.714 [2024-11-19 18:08:26.129892] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:09:24.714 [2024-11-19 18:08:26.129957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1831649 ] 00:09:24.974 [2024-11-19 18:08:26.221301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.974 [2024-11-19 18:08:26.275727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.235 Running I/O for 10 seconds... 
00:09:27.117 7938.00 IOPS, 62.02 MiB/s [2024-11-19T17:08:29.530Z] 8833.00 IOPS, 69.01 MiB/s [2024-11-19T17:08:30.469Z] 9146.00 IOPS, 71.45 MiB/s [2024-11-19T17:08:31.853Z] 9294.75 IOPS, 72.62 MiB/s [2024-11-19T17:08:32.795Z] 9389.00 IOPS, 73.35 MiB/s [2024-11-19T17:08:33.736Z] 9455.67 IOPS, 73.87 MiB/s [2024-11-19T17:08:34.678Z] 9500.57 IOPS, 74.22 MiB/s [2024-11-19T17:08:35.619Z] 9532.50 IOPS, 74.47 MiB/s [2024-11-19T17:08:36.560Z] 9554.22 IOPS, 74.64 MiB/s [2024-11-19T17:08:36.560Z] 9575.30 IOPS, 74.81 MiB/s 00:09:35.089 Latency(us) 00:09:35.089 [2024-11-19T17:08:36.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.089 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:35.089 Verification LBA range: start 0x0 length 0x1000 00:09:35.089 Nvme1n1 : 10.01 9575.38 74.81 0.00 0.00 13319.01 1324.37 28398.93 00:09:35.089 [2024-11-19T17:08:36.560Z] =================================================================================================================== 00:09:35.089 [2024-11-19T17:08:36.560Z] Total : 9575.38 74.81 0.00 0.00 13319.01 1324.37 28398.93 00:09:35.350 18:08:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1833769 00:09:35.350 18:08:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:35.350 18:08:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.350 18:08:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:35.350 18:08:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:35.350 18:08:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:35.350 18:08:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:35.350 18:08:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:35.350 18:08:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:35.350 { 00:09:35.350 "params": { 00:09:35.350 "name": "Nvme$subsystem", 00:09:35.350 "trtype": "$TEST_TRANSPORT", 00:09:35.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:35.350 "adrfam": "ipv4", 00:09:35.350 "trsvcid": "$NVMF_PORT", 00:09:35.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:35.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:35.350 "hdgst": ${hdgst:-false}, 00:09:35.350 "ddgst": ${ddgst:-false} 00:09:35.350 }, 00:09:35.350 "method": "bdev_nvme_attach_controller" 00:09:35.350 } 00:09:35.350 EOF 00:09:35.350 )") 00:09:35.350 18:08:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:35.350 [2024-11-19 18:08:36.586938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.586968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 18:08:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:35.350 18:08:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:35.350 18:08:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:35.350 "params": { 00:09:35.350 "name": "Nvme1", 00:09:35.350 "trtype": "tcp", 00:09:35.350 "traddr": "10.0.0.2", 00:09:35.350 "adrfam": "ipv4", 00:09:35.350 "trsvcid": "4420", 00:09:35.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:35.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:35.350 "hdgst": false, 00:09:35.350 "ddgst": false 00:09:35.350 }, 00:09:35.350 "method": "bdev_nvme_attach_controller" 00:09:35.350 }' 00:09:35.350 [2024-11-19 18:08:36.598935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.598945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 [2024-11-19 18:08:36.610964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.610974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 [2024-11-19 18:08:36.622994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.623003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 [2024-11-19 18:08:36.631269] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:09:35.350 [2024-11-19 18:08:36.631323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1833769 ] 00:09:35.350 [2024-11-19 18:08:36.635025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.635036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 [2024-11-19 18:08:36.647054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.647063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 [2024-11-19 18:08:36.659086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.659094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 [2024-11-19 18:08:36.671114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.671122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 [2024-11-19 18:08:36.683143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.683151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 [2024-11-19 18:08:36.695178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.695187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 [2024-11-19 18:08:36.707205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.707213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:35.350 [2024-11-19 18:08:36.715629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.350 [2024-11-19 18:08:36.719234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.719242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 [2024-11-19 18:08:36.731264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.731273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 [2024-11-19 18:08:36.743295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.743307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 [2024-11-19 18:08:36.745032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.350 [2024-11-19 18:08:36.755327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.755336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 [2024-11-19 18:08:36.767359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.767373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 [2024-11-19 18:08:36.779386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.779396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 [2024-11-19 18:08:36.791416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.791425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 [2024-11-19 18:08:36.803445] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.803453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.350 [2024-11-19 18:08:36.815477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.350 [2024-11-19 18:08:36.815485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:36.827523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:36.827542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:36.839548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:36.839559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:36.851579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:36.851591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:36.863608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:36.863619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:36.875638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:36.875647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:36.887674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:36.887689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 Running I/O for 5 seconds... 
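[Editorial note: the bdevperf summary earlier in this log reports 9575.38 IOPS at an 8192-byte IO size alongside 74.81 MiB/s. Those two columns are consistent with each other, as a quick awk check of the arithmetic shows:]

```shell
# 9575.38 IOPS * 8192 bytes per IO, converted to MiB/s (1 MiB = 1048576 bytes).
awk 'BEGIN { printf "%.2f\n", 9575.38 * 8192 / (1024 * 1024) }'
# prints 74.81
```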
00:09:35.611 [2024-11-19 18:08:36.903231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:36.903248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:36.916251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:36.916268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:36.929766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:36.929782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:36.942988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:36.943003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:36.955784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:36.955799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:36.968349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:36.968364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:36.981098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:36.981113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:36.993293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:36.993309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:37.006666] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:37.006681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:37.019953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:37.019968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:37.033653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:37.033667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:37.046539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:37.046554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:37.058897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:37.058912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.611 [2024-11-19 18:08:37.071551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.611 [2024-11-19 18:08:37.071566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.873 [2024-11-19 18:08:37.084285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.873 [2024-11-19 18:08:37.084300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.873 [2024-11-19 18:08:37.096944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.873 [2024-11-19 18:08:37.096960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.873 [2024-11-19 18:08:37.110358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:35.873 [2024-11-19 18:08:37.110373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.873 [2024-11-19 18:08:37.123449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.873 [2024-11-19 18:08:37.123464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.873 [2024-11-19 18:08:37.136834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.873 [2024-11-19 18:08:37.136849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.873 [2024-11-19 18:08:37.150534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.873 [2024-11-19 18:08:37.150549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.873 [2024-11-19 18:08:37.162965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.873 [2024-11-19 18:08:37.162980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.873 [2024-11-19 18:08:37.176137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.873 [2024-11-19 18:08:37.176152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.873 [2024-11-19 18:08:37.188873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.873 [2024-11-19 18:08:37.188888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.873 [2024-11-19 18:08:37.201184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.873 [2024-11-19 18:08:37.201199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.873 [2024-11-19 18:08:37.214615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.873 
[2024-11-19 18:08:37.214631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.873 [2024-11-19 18:08:37.227752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.873 [2024-11-19 18:08:37.227766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.873 [2024-11-19 18:08:37.240618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.873 [2024-11-19 18:08:37.240633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.873 [2024-11-19 18:08:37.253583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.873 [2024-11-19 18:08:37.253598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.873 [2024-11-19 18:08:37.266681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.874 [2024-11-19 18:08:37.266696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.874 [2024-11-19 18:08:37.279403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.874 [2024-11-19 18:08:37.279418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.874 [2024-11-19 18:08:37.292440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.874 [2024-11-19 18:08:37.292455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.874 [2024-11-19 18:08:37.305307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.874 [2024-11-19 18:08:37.305322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.874 [2024-11-19 18:08:37.318486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.874 [2024-11-19 18:08:37.318501] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.874 [2024-11-19 18:08:37.331330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.874 [2024-11-19 18:08:37.331345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.343902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.343917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.356769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.356783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.370404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.370419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.383491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.383506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.396514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.396529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.409571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.409585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.421995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.422010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:36.135 [2024-11-19 18:08:37.434441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.434457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.447334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.447349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.460865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.460880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.474228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.474242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.486781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.486800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.499835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.499850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.512486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.512500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.526095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.526110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.539243] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.539259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.552005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.552020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.565269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.565284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.577977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.577992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.135 [2024-11-19 18:08:37.590458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.135 [2024-11-19 18:08:37.590473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.603398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.603413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.617027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.617042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.629799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.629814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.642754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.642769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.656131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.656145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.668986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.669002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.682328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.682344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.695611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.695627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.709183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.709198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.722813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.722829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.736198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.736218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.749574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 
[2024-11-19 18:08:37.749590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.762984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.762999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.776443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.776458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.789799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.789815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.803179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.803194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.816247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.816263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.829886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.829901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.842811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.842826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.396 [2024-11-19 18:08:37.855739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.396 [2024-11-19 18:08:37.855754] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.657 [2024-11-19 18:08:37.868655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.657 [2024-11-19 18:08:37.868671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.657 [2024-11-19 18:08:37.881446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.657 [2024-11-19 18:08:37.881462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.657 19035.00 IOPS, 148.71 MiB/s [2024-11-19T17:08:38.128Z] [2024-11-19 18:08:37.894187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.657 [2024-11-19 18:08:37.894202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.657 [2024-11-19 18:08:37.907618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.657 [2024-11-19 18:08:37.907633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.657 [2024-11-19 18:08:37.920657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.657 [2024-11-19 18:08:37.920673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.657 [2024-11-19 18:08:37.933291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.657 [2024-11-19 18:08:37.933307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.657 [2024-11-19 18:08:37.946224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.657 [2024-11-19 18:08:37.946239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.657 [2024-11-19 18:08:37.958389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.657 [2024-11-19 18:08:37.958405] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.657 [2024-11-19 18:08:37.971413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.657 [2024-11-19 18:08:37.971428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.657 [2024-11-19 18:08:37.984139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.657 [2024-11-19 18:08:37.984165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.657 [2024-11-19 18:08:37.997251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.657 [2024-11-19 18:08:37.997266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.657 [2024-11-19 18:08:38.010233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.657 [2024-11-19 18:08:38.010249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.657 [2024-11-19 18:08:38.023647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.657 [2024-11-19 18:08:38.023662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.657 [2024-11-19 18:08:38.036870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.657 [2024-11-19 18:08:38.036885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.657 [2024-11-19 18:08:38.049890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.657 [2024-11-19 18:08:38.049905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.657 [2024-11-19 18:08:38.063252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.657 [2024-11-19 18:08:38.063267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:36.657 [2024-11-19 18:08:38.076608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.657 [2024-11-19 18:08:38.076624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... preceding pair of *ERROR* messages repeated ~170 times, roughly every 13 ms, from 2024-11-19 18:08:38.089 through 18:08:40.303; individual timestamps omitted ...]
00:09:37.440 19127.50 IOPS, 149.43 MiB/s [2024-11-19T17:08:38.911Z]
00:09:38.483 19186.67 IOPS, 149.90 MiB/s [2024-11-19T17:08:39.954Z]
00:09:39.005 [2024-11-19 18:08:40.316257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.005 [2024-11-19 18:08:40.316272]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.005 [2024-11-19 18:08:40.329308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.005 [2024-11-19 18:08:40.329323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.005 [2024-11-19 18:08:40.341870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.005 [2024-11-19 18:08:40.341885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.005 [2024-11-19 18:08:40.354404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.005 [2024-11-19 18:08:40.354419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.005 [2024-11-19 18:08:40.368028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.005 [2024-11-19 18:08:40.368043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.005 [2024-11-19 18:08:40.381113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.005 [2024-11-19 18:08:40.381128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.005 [2024-11-19 18:08:40.393680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.005 [2024-11-19 18:08:40.393695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.005 [2024-11-19 18:08:40.406458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.005 [2024-11-19 18:08:40.406473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.005 [2024-11-19 18:08:40.419925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.005 [2024-11-19 18:08:40.419940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:39.005 [2024-11-19 18:08:40.433549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.005 [2024-11-19 18:08:40.433563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.005 [2024-11-19 18:08:40.447014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.005 [2024-11-19 18:08:40.447034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.005 [2024-11-19 18:08:40.459564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.005 [2024-11-19 18:08:40.459579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.005 [2024-11-19 18:08:40.472262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.005 [2024-11-19 18:08:40.472277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.265 [2024-11-19 18:08:40.485774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.485790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.498967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.498982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.512149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.512167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.524996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.525010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.537459] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.537474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.550340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.550355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.563621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.563635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.576167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.576182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.588833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.588847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.602301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.602315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.615600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.615615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.628233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.628248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.641122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.641136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.654728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.654743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.668253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.668269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.680641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.680657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.693741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.693760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.707155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.707173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.721027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.721042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.266 [2024-11-19 18:08:40.733704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.266 [2024-11-19 18:08:40.733718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 [2024-11-19 18:08:40.746533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 
[2024-11-19 18:08:40.746548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 [2024-11-19 18:08:40.758810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 [2024-11-19 18:08:40.758825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 [2024-11-19 18:08:40.772190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 [2024-11-19 18:08:40.772206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 [2024-11-19 18:08:40.785794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 [2024-11-19 18:08:40.785809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 [2024-11-19 18:08:40.798953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 [2024-11-19 18:08:40.798968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 [2024-11-19 18:08:40.812524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 [2024-11-19 18:08:40.812538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 [2024-11-19 18:08:40.825977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 [2024-11-19 18:08:40.825992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 [2024-11-19 18:08:40.839370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 [2024-11-19 18:08:40.839385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 [2024-11-19 18:08:40.852860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 [2024-11-19 18:08:40.852875] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 [2024-11-19 18:08:40.865875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 [2024-11-19 18:08:40.865889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 [2024-11-19 18:08:40.878814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 [2024-11-19 18:08:40.878828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 [2024-11-19 18:08:40.891439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 [2024-11-19 18:08:40.891454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 19203.50 IOPS, 150.03 MiB/s [2024-11-19T17:08:40.997Z] [2024-11-19 18:08:40.904589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 [2024-11-19 18:08:40.904604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 [2024-11-19 18:08:40.917935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 [2024-11-19 18:08:40.917949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 [2024-11-19 18:08:40.930553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 [2024-11-19 18:08:40.930568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 [2024-11-19 18:08:40.943194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 [2024-11-19 18:08:40.943213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 [2024-11-19 18:08:40.956424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 [2024-11-19 18:08:40.956440] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.526 [2024-11-19 18:08:40.969418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.526 [2024-11-19 18:08:40.969433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.527 [2024-11-19 18:08:40.983247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.527 [2024-11-19 18:08:40.983262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:40.996512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:40.996527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.009135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.009149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.022730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.022744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.035751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.035766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.048814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.048829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.061730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.061744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:39.788 [2024-11-19 18:08:41.074461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.074476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.088327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.088342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.100902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.100917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.114589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.114604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.127467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.127482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.140387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.140401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.153646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.153661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.167110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.167125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.179312] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.179327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.192214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.192229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.204514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.204529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.218106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.218121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.231587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.231602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.788 [2024-11-19 18:08:41.244891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.788 [2024-11-19 18:08:41.244906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.258614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.258630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.271467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.271482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.284101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.284116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.296456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.296470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.309661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.309676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.322286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.322301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.335019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.335035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.348438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.348454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.361526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.361542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.374434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.374449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.386860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 
[2024-11-19 18:08:41.386875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.399616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.399631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.413283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.413299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.426535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.426551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.440134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.440150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.453318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.453334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.466133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.466149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.479393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.479409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.492983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.492998] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.505765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.505780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.050 [2024-11-19 18:08:41.518458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.050 [2024-11-19 18:08:41.518473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.312 [2024-11-19 18:08:41.531496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.312 [2024-11-19 18:08:41.531512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.312 [2024-11-19 18:08:41.544575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.312 [2024-11-19 18:08:41.544591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.312 [2024-11-19 18:08:41.557952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.312 [2024-11-19 18:08:41.557968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.312 [2024-11-19 18:08:41.570994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.312 [2024-11-19 18:08:41.571009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.312 [2024-11-19 18:08:41.584398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.312 [2024-11-19 18:08:41.584414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.312 [2024-11-19 18:08:41.597253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.312 [2024-11-19 18:08:41.597269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:40.312 [2024-11-19 18:08:41.610954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.312 [2024-11-19 18:08:41.610969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.312 [2024-11-19 18:08:41.623762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.312 [2024-11-19 18:08:41.623777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.312 [2024-11-19 18:08:41.637071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.312 [2024-11-19 18:08:41.637086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.312 [2024-11-19 18:08:41.649958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.312 [2024-11-19 18:08:41.649973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.312 [2024-11-19 18:08:41.662844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.312 [2024-11-19 18:08:41.662859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.312 [2024-11-19 18:08:41.676275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.312 [2024-11-19 18:08:41.676291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.312 [2024-11-19 18:08:41.689878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.312 [2024-11-19 18:08:41.689893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.312 [2024-11-19 18:08:41.703195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.312 [2024-11-19 18:08:41.703210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.312 [2024-11-19 18:08:41.715826] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.313 [2024-11-19 18:08:41.715842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.313 [2024-11-19 18:08:41.729392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.313 [2024-11-19 18:08:41.729407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.313 [2024-11-19 18:08:41.742678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.313 [2024-11-19 18:08:41.742693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.313 [2024-11-19 18:08:41.755798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.313 [2024-11-19 18:08:41.755814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.313 [2024-11-19 18:08:41.769313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.313 [2024-11-19 18:08:41.769329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:41.782105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.782120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:41.794845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.794861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:41.807471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.807486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:41.820690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.820705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:41.833352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.833367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:41.846371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.846387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:41.858955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.858970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:41.872826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.872842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:41.885317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.885332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:41.898801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.898817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 19206.20 IOPS, 150.05 MiB/s 00:09:40.574 Latency(us) 00:09:40.574 [2024-11-19T17:08:42.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.574 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:40.574 Nvme1n1 : 5.00 19215.24 150.12 0.00 0.00 6656.25 2730.67 17476.27 00:09:40.574 
[2024-11-19T17:08:42.045Z] =================================================================================================================== 00:09:40.574 [2024-11-19T17:08:42.045Z] Total : 19215.24 150.12 0.00 0.00 6656.25 2730.67 17476.27 00:09:40.574 [2024-11-19 18:08:41.908775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.908790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:41.920803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.920815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:41.932838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.932851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:41.944864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.944876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:41.956892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.956902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:41.968921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.968931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:41.980949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.980958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:41.992985] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:41.992996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 [2024-11-19 18:08:42.005012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.574 [2024-11-19 18:08:42.005021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1833769) - No such process 00:09:40.574 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1833769 00:09:40.574 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.574 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.574 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.574 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.574 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:40.574 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.574 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.574 delay0 00:09:40.574 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.574 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:40.574 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.574 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:40.834 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.834 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:40.834 [2024-11-19 18:08:42.176810] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:48.968 Initializing NVMe Controllers 00:09:48.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:48.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:48.968 Initialization complete. Launching workers. 00:09:48.968 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 233, failed: 33847 00:09:48.968 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 33956, failed to submit 124 00:09:48.968 success 33876, unsuccessful 80, failed 0 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:48.968 
rmmod nvme_tcp 00:09:48.968 rmmod nvme_fabrics 00:09:48.968 rmmod nvme_keyring 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1831555 ']' 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1831555 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1831555 ']' 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1831555 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1831555 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1831555' 00:09:48.968 killing process with pid 1831555 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1831555 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1831555 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.968 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:50.351 00:09:50.351 real 0m34.241s 00:09:50.351 user 0m44.995s 00:09:50.351 sys 0m11.765s 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.351 ************************************ 00:09:50.351 END TEST nvmf_zcopy 00:09:50.351 ************************************ 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.351 ************************************ 00:09:50.351 START TEST nvmf_nmic 00:09:50.351 ************************************ 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:50.351 * Looking for test storage... 00:09:50.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@340 -- # ver1_l=2 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:50.351 
18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:50.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.351 --rc genhtml_branch_coverage=1 00:09:50.351 --rc genhtml_function_coverage=1 00:09:50.351 --rc genhtml_legend=1 00:09:50.351 --rc geninfo_all_blocks=1 00:09:50.351 --rc geninfo_unexecuted_blocks=1 00:09:50.351 00:09:50.351 ' 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:50.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.351 --rc genhtml_branch_coverage=1 00:09:50.351 --rc genhtml_function_coverage=1 00:09:50.351 --rc genhtml_legend=1 00:09:50.351 --rc geninfo_all_blocks=1 00:09:50.351 --rc geninfo_unexecuted_blocks=1 00:09:50.351 00:09:50.351 ' 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:50.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.351 --rc genhtml_branch_coverage=1 00:09:50.351 --rc genhtml_function_coverage=1 00:09:50.351 --rc genhtml_legend=1 00:09:50.351 --rc geninfo_all_blocks=1 00:09:50.351 --rc geninfo_unexecuted_blocks=1 00:09:50.351 00:09:50.351 ' 00:09:50.351 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:50.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.351 --rc genhtml_branch_coverage=1 00:09:50.351 --rc genhtml_function_coverage=1 00:09:50.351 --rc genhtml_legend=1 00:09:50.351 --rc geninfo_all_blocks=1 00:09:50.351 --rc geninfo_unexecuted_blocks=1 00:09:50.351 00:09:50.351 ' 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.613 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:50.614 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:50.614 
18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:50.614 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:58.754 18:08:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:58.754 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:58.754 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:58.754 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:58.754 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:58.754 
18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:58.754 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:58.755 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.755 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:58.755 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:58.755 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:58.755 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:58.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:58.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms
00:09:58.755
00:09:58.755 --- 10.0.0.2 ping statistics ---
00:09:58.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:58.755 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:58.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:58.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms
00:09:58.755
00:09:58.755 --- 10.0.0.1 ping statistics ---
00:09:58.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:58.755 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1840529
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1840529
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1840529 ']'
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:58.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:58.755 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:58.755 [2024-11-19 18:08:59.296495] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization...
00:09:58.755 [2024-11-19 18:08:59.296561] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:58.755 [2024-11-19 18:08:59.396094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:58.755 [2024-11-19 18:08:59.449501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:58.755 [2024-11-19 18:08:59.449555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:58.755 [2024-11-19 18:08:59.449564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:58.755 [2024-11-19 18:08:59.449571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:58.755 [2024-11-19 18:08:59.449577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:58.755 [2024-11-19 18:08:59.451599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:58.755 [2024-11-19 18:08:59.451759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:58.755 [2024-11-19 18:08:59.451920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:58.755 [2024-11-19 18:08:59.451920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:58.755 [2024-11-19 18:09:00.156381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:58.755 Malloc0
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.755 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:59.016 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.016 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:59.016 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.016 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:59.016 [2024-11-19 18:09:00.231986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:59.016 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.016 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:09:59.016 test case1: single bdev can't be used in multiple subsystems
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:59.017 [2024-11-19 18:09:00.267904] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:09:59.017 [2024-11-19 18:09:00.267926] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:09:59.017 [2024-11-19 18:09:00.267938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:59.017 request:
00:09:59.017 {
00:09:59.017 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:09:59.017 "namespace": {
00:09:59.017 "bdev_name": "Malloc0",
00:09:59.017 "no_auto_visible": false
00:09:59.017 },
00:09:59.017 "method": "nvmf_subsystem_add_ns",
00:09:59.017 "req_id": 1
00:09:59.017 }
00:09:59.017 Got JSON-RPC error response
00:09:59.017 response:
00:09:59.017 {
00:09:59.017 "code": -32602,
00:09:59.017 "message": "Invalid parameters"
00:09:59.017 }
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:09:59.017  Adding namespace failed - expected result.
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:09:59.017 test case2: host connect to nvmf target in multiple paths
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:59.017 [2024-11-19 18:09:00.280071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.017 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:00.404 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:10:02.315 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:10:02.315 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:10:02.315 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:10:02.315 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:10:02.315 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:10:04.225 18:09:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:10:04.225 18:09:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:10:04.225 18:09:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:10:04.225 18:09:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:10:04.225 18:09:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:10:04.225 18:09:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
00:10:04.225 18:09:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:10:04.225 [global]
00:10:04.225 thread=1
00:10:04.225 invalidate=1
00:10:04.225 rw=write
00:10:04.225 time_based=1
00:10:04.225 runtime=1
00:10:04.225 ioengine=libaio
00:10:04.225 direct=1
00:10:04.225 bs=4096
00:10:04.225 iodepth=1
00:10:04.225 norandommap=0
00:10:04.225 numjobs=1
00:10:04.225
00:10:04.225 verify_dump=1
00:10:04.225 verify_backlog=512
00:10:04.225 verify_state_save=0
00:10:04.225 do_verify=1
00:10:04.225 verify=crc32c-intel
00:10:04.225 [job0]
00:10:04.225 filename=/dev/nvme0n1
00:10:04.225 Could not set queue depth (nvme0n1)
00:10:04.225 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:04.225 fio-3.35
00:10:04.225 Starting 1 thread
00:10:05.609
00:10:05.609 job0: (groupid=0, jobs=1): err= 0: pid=1841971: Tue Nov 19 18:09:06 2024
00:10:05.609 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:10:05.609 slat (nsec): min=7035, max=55394, avg=24585.07, stdev=4413.78
00:10:05.609 clat (usec): min=524, max=1186, avg=940.53, stdev=105.46
00:10:05.609 lat (usec): min=550, max=1212, avg=965.12, stdev=107.05
00:10:05.609 clat percentiles (usec):
00:10:05.609 | 1.00th=[ 627], 5.00th=[ 717], 10.00th=[ 783], 20.00th=[ 873],
00:10:05.609 | 30.00th=[ 922], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 988],
00:10:05.609 | 70.00th=[ 1004], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057],
00:10:05.609 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1188], 99.95th=[ 1188],
00:10:05.609 | 99.99th=[ 1188]
00:10:05.609 write: IOPS=953, BW=3812KiB/s (3904kB/s)(3816KiB/1001msec); 0 zone resets
00:10:05.609 slat (nsec): min=9282, max=64837, avg=25799.86, stdev=10985.21
00:10:05.609 clat (usec): min=197, max=841, avg=494.15, stdev=155.82
00:10:05.609 lat (usec): min=219, max=857, avg=519.95, stdev=160.56
00:10:05.609 clat percentiles (usec):
00:10:05.609 | 1.00th=[ 212], 5.00th=[ 229], 10.00th=[ 297], 20.00th=[ 334],
00:10:05.609 | 30.00th=[ 396], 40.00th=[ 420], 50.00th=[ 506], 60.00th=[ 570],
00:10:05.609 | 70.00th=[ 603], 80.00th=[ 660], 90.00th=[ 701], 95.00th=[ 717],
00:10:05.609 | 99.00th=[ 766], 99.50th=[ 783], 99.90th=[ 840], 99.95th=[ 840],
00:10:05.609 | 99.99th=[ 840]
00:10:05.609 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:10:05.609 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:10:05.609 lat (usec) : 250=4.64%, 500=26.81%, 750=35.54%, 1000=21.49%
00:10:05.609 lat (msec) : 2=11.53%
00:10:05.609 cpu : usr=2.40%, sys=3.40%, ctx=1466, majf=0, minf=1
00:10:05.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:05.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:05.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:05.609 issued rwts: total=512,954,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:05.609 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:05.609
00:10:05.609 Run status group 0 (all jobs):
00:10:05.609 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec
00:10:05.609 WRITE: bw=3812KiB/s (3904kB/s), 3812KiB/s-3812KiB/s (3904kB/s-3904kB/s), io=3816KiB (3908kB), run=1001-1001msec
00:10:05.609
00:10:05.609 Disk stats (read/write):
00:10:05.609 nvme0n1: ios=562/709, merge=0/0, ticks=880/369, in_queue=1249, util=98.20%
00:10:05.609 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:05.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:10:05.609 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:05.609 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0
00:10:05.609 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:10:05.609 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:05.609 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:10:05.609 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:05.609 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0
00:10:05.609 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:10:05.609 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:10:05.609 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:05.609 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync
00:10:05.610 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:05.610 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e
00:10:05.610 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:05.610 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:05.610 rmmod nvme_tcp
00:10:05.610 rmmod nvme_fabrics
00:10:05.610 rmmod nvme_keyring
00:10:05.610 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:05.610 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e
00:10:05.610 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0
00:10:05.610 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1840529 ']'
00:10:05.610 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1840529
00:10:05.610 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1840529 ']'
00:10:05.610 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1840529
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1840529
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1840529'
00:10:05.870 killing process with pid 1840529
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1840529
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1840529
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:05.870 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:08.415 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:08.415
00:10:08.415 real 0m17.733s
00:10:08.415 user 0m45.128s
00:10:08.415 sys 0m6.502s
00:10:08.415 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:08.415 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:08.415 ************************************
00:10:08.415 END TEST nvmf_nmic
00:10:08.415 ************************************
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:08.416 ************************************
00:10:08.416 START TEST nvmf_fio_target
00:10:08.416 ************************************
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:10:08.416 * Looking for test storage...
00:10:08.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-:
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-:
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<'
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:08.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:08.416 --rc genhtml_branch_coverage=1
00:10:08.416 --rc genhtml_function_coverage=1
00:10:08.416 --rc genhtml_legend=1
00:10:08.416 --rc geninfo_all_blocks=1
00:10:08.416 --rc geninfo_unexecuted_blocks=1
00:10:08.416
00:10:08.416 '
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:08.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:08.416 --rc genhtml_branch_coverage=1
00:10:08.416 --rc genhtml_function_coverage=1
00:10:08.416 --rc genhtml_legend=1
00:10:08.416 --rc geninfo_all_blocks=1
00:10:08.416 --rc geninfo_unexecuted_blocks=1
00:10:08.416
00:10:08.416 '
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:10:08.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:08.416 --rc genhtml_branch_coverage=1
00:10:08.416 --rc genhtml_function_coverage=1
00:10:08.416 --rc genhtml_legend=1
00:10:08.416 --rc geninfo_all_blocks=1
00:10:08.416 --rc geninfo_unexecuted_blocks=1
00:10:08.416
00:10:08.416 '
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:10:08.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:08.416 --rc genhtml_branch_coverage=1
00:10:08.416 --rc genhtml_function_coverage=1
00:10:08.416 --rc genhtml_legend=1
00:10:08.416 --rc geninfo_all_blocks=1
00:10:08.416 --rc geninfo_unexecuted_blocks=1
00:10:08.416
00:10:08.416 '
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH
00:10:08.416 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:08.417 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.566 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.566 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:16.566 18:09:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:16.566 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:16.566 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:16.567 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:16.567 18:09:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:16.567 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:16.567 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:16.567 Found net devices under 0000:4b:00.1: cvl_0_1 
00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:16.567 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.567 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.567 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.567 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:16.567 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:16.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:16.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:10:16.567 00:10:16.567 --- 10.0.0.2 ping statistics --- 00:10:16.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.567 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:10:16.567 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:16.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:10:16.567 00:10:16.567 --- 10.0.0.1 ping statistics --- 00:10:16.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.567 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:10:16.567 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.567 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:16.567 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:16.567 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1847075 00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1847075 00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1847075 ']' 00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.568 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.568 [2024-11-19 18:09:17.221614] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:10:16.568 [2024-11-19 18:09:17.221681] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.568 [2024-11-19 18:09:17.324644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.568 [2024-11-19 18:09:17.376791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.568 [2024-11-19 18:09:17.376840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.568 [2024-11-19 18:09:17.376848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.568 [2024-11-19 18:09:17.376856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.568 [2024-11-19 18:09:17.376863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:16.568 [2024-11-19 18:09:17.379264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.568 [2024-11-19 18:09:17.379543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.568 [2024-11-19 18:09:17.379704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.568 [2024-11-19 18:09:17.379706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.830 18:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.830 18:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:16.830 18:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:16.830 18:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:16.830 18:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.830 18:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.830 18:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:16.830 [2024-11-19 18:09:18.255887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.830 18:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:17.092 18:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:17.092 18:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:17.353 18:09:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:17.353 18:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:17.614 18:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:17.614 18:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:17.874 18:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:17.874 18:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:18.137 18:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.137 18:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:18.137 18:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.397 18:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:18.397 18:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.657 18:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:18.657 18:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:18.917 18:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:18.917 18:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:18.917 18:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:19.177 18:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:19.178 18:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:19.439 18:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.439 [2024-11-19 18:09:20.851774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:19.439 18:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:19.699 18:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:19.960 18:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:21.344 18:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:21.345 18:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:21.345 18:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:21.345 18:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:21.345 18:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:21.345 18:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:23.888 18:09:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:23.888 18:09:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:23.888 18:09:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:23.888 18:09:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:23.888 18:09:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:23.888 18:09:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:23.888 18:09:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:23.888 [global] 00:10:23.888 thread=1 00:10:23.888 invalidate=1 00:10:23.888 rw=write 00:10:23.888 time_based=1 00:10:23.888 runtime=1 00:10:23.888 ioengine=libaio 00:10:23.888 direct=1 00:10:23.888 bs=4096 00:10:23.888 iodepth=1 00:10:23.888 norandommap=0 00:10:23.888 numjobs=1 00:10:23.888 00:10:23.888 
verify_dump=1 00:10:23.888 verify_backlog=512 00:10:23.888 verify_state_save=0 00:10:23.888 do_verify=1 00:10:23.888 verify=crc32c-intel 00:10:23.888 [job0] 00:10:23.888 filename=/dev/nvme0n1 00:10:23.888 [job1] 00:10:23.888 filename=/dev/nvme0n2 00:10:23.888 [job2] 00:10:23.888 filename=/dev/nvme0n3 00:10:23.888 [job3] 00:10:23.888 filename=/dev/nvme0n4 00:10:23.888 Could not set queue depth (nvme0n1) 00:10:23.888 Could not set queue depth (nvme0n2) 00:10:23.888 Could not set queue depth (nvme0n3) 00:10:23.888 Could not set queue depth (nvme0n4) 00:10:23.888 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.888 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.888 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.888 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.888 fio-3.35 00:10:23.888 Starting 4 threads 00:10:25.274 00:10:25.274 job0: (groupid=0, jobs=1): err= 0: pid=1848786: Tue Nov 19 18:09:26 2024 00:10:25.274 read: IOPS=17, BW=69.6KiB/s (71.2kB/s)(72.0KiB/1035msec) 00:10:25.274 slat (nsec): min=25946, max=31454, avg=26688.22, stdev=1271.22 00:10:25.274 clat (usec): min=40887, max=42593, avg=41527.12, stdev=539.42 00:10:25.274 lat (usec): min=40915, max=42621, avg=41553.81, stdev=539.49 00:10:25.274 clat percentiles (usec): 00:10:25.274 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:25.274 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:10:25.274 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:10:25.274 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:25.274 | 99.99th=[42730] 00:10:25.274 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:10:25.274 slat (nsec): min=9021, 
max=53168, avg=29221.43, stdev=10094.84 00:10:25.274 clat (usec): min=147, max=973, avg=524.97, stdev=133.76 00:10:25.274 lat (usec): min=158, max=1007, avg=554.19, stdev=137.65 00:10:25.274 clat percentiles (usec): 00:10:25.274 | 1.00th=[ 281], 5.00th=[ 326], 10.00th=[ 359], 20.00th=[ 416], 00:10:25.274 | 30.00th=[ 445], 40.00th=[ 478], 50.00th=[ 515], 60.00th=[ 553], 00:10:25.274 | 70.00th=[ 586], 80.00th=[ 635], 90.00th=[ 709], 95.00th=[ 783], 00:10:25.274 | 99.00th=[ 848], 99.50th=[ 873], 99.90th=[ 971], 99.95th=[ 971], 00:10:25.274 | 99.99th=[ 971] 00:10:25.274 bw ( KiB/s): min= 4096, max= 4096, per=51.75%, avg=4096.00, stdev= 0.00, samples=1 00:10:25.274 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:25.274 lat (usec) : 250=0.19%, 500=43.58%, 750=46.23%, 1000=6.60% 00:10:25.274 lat (msec) : 50=3.40% 00:10:25.274 cpu : usr=1.26%, sys=1.55%, ctx=530, majf=0, minf=1 00:10:25.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.274 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.274 job1: (groupid=0, jobs=1): err= 0: pid=1848796: Tue Nov 19 18:09:26 2024 00:10:25.274 read: IOPS=16, BW=67.3KiB/s (68.9kB/s)(68.0KiB/1010msec) 00:10:25.274 slat (nsec): min=25742, max=26375, avg=26098.18, stdev=166.83 00:10:25.274 clat (usec): min=40916, max=41077, avg=40973.05, stdev=37.09 00:10:25.274 lat (usec): min=40943, max=41103, avg=40999.15, stdev=37.06 00:10:25.274 clat percentiles (usec): 00:10:25.274 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:25.274 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:25.274 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:25.274 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:25.274 | 99.99th=[41157] 00:10:25.274 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:10:25.274 slat (usec): min=9, max=2930, avg=37.78, stdev=149.31 00:10:25.274 clat (usec): min=199, max=875, avg=566.46, stdev=149.97 00:10:25.274 lat (usec): min=229, max=3658, avg=604.24, stdev=218.96 00:10:25.274 clat percentiles (usec): 00:10:25.274 | 1.00th=[ 219], 5.00th=[ 318], 10.00th=[ 359], 20.00th=[ 437], 00:10:25.274 | 30.00th=[ 474], 40.00th=[ 523], 50.00th=[ 586], 60.00th=[ 619], 00:10:25.274 | 70.00th=[ 668], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 783], 00:10:25.274 | 99.00th=[ 832], 99.50th=[ 840], 99.90th=[ 873], 99.95th=[ 873], 00:10:25.274 | 99.99th=[ 873] 00:10:25.274 bw ( KiB/s): min= 4104, max= 4104, per=51.85%, avg=4104.00, stdev= 0.00, samples=1 00:10:25.274 iops : min= 1026, max= 1026, avg=1026.00, stdev= 0.00, samples=1 00:10:25.274 lat (usec) : 250=1.51%, 500=32.51%, 750=51.04%, 1000=11.72% 00:10:25.274 lat (msec) : 50=3.21% 00:10:25.274 cpu : usr=1.09%, sys=1.09%, ctx=534, majf=0, minf=1 00:10:25.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.274 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.274 job2: (groupid=0, jobs=1): err= 0: pid=1848815: Tue Nov 19 18:09:26 2024 00:10:25.274 read: IOPS=190, BW=763KiB/s (782kB/s)(764KiB/1001msec) 00:10:25.274 slat (nsec): min=9994, max=60386, avg=27008.86, stdev=3878.24 00:10:25.274 clat (usec): min=901, max=41250, avg=3389.67, stdev=9339.80 00:10:25.274 lat (usec): min=928, max=41260, avg=3416.67, stdev=9339.31 00:10:25.274 clat percentiles (usec): 00:10:25.274 | 1.00th=[ 930], 5.00th=[ 1004], 10.00th=[ 1029], 
20.00th=[ 1045], 00:10:25.274 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:10:25.274 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[41157], 00:10:25.274 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:25.274 | 99.99th=[41157] 00:10:25.274 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:25.274 slat (nsec): min=9533, max=68876, avg=31081.19, stdev=8614.92 00:10:25.274 clat (usec): min=196, max=1043, avg=637.57, stdev=135.09 00:10:25.274 lat (usec): min=206, max=1077, avg=668.65, stdev=139.21 00:10:25.274 clat percentiles (usec): 00:10:25.274 | 1.00th=[ 330], 5.00th=[ 412], 10.00th=[ 449], 20.00th=[ 529], 00:10:25.274 | 30.00th=[ 570], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 685], 00:10:25.274 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 799], 95.00th=[ 848], 00:10:25.274 | 99.00th=[ 955], 99.50th=[ 1004], 99.90th=[ 1045], 99.95th=[ 1045], 00:10:25.274 | 99.99th=[ 1045] 00:10:25.274 bw ( KiB/s): min= 4096, max= 4096, per=51.75%, avg=4096.00, stdev= 0.00, samples=1 00:10:25.274 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:25.274 lat (usec) : 250=0.28%, 500=11.24%, 750=47.23%, 1000=14.94% 00:10:25.274 lat (msec) : 2=24.75%, 50=1.56% 00:10:25.274 cpu : usr=2.30%, sys=1.90%, ctx=703, majf=0, minf=1 00:10:25.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.274 issued rwts: total=191,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.274 job3: (groupid=0, jobs=1): err= 0: pid=1848822: Tue Nov 19 18:09:26 2024 00:10:25.274 read: IOPS=16, BW=66.1KiB/s (67.7kB/s)(68.0KiB/1029msec) 00:10:25.274 slat (nsec): min=26873, max=28562, avg=27269.53, stdev=390.51 00:10:25.274 clat 
(usec): min=40913, max=42017, avg=41676.78, stdev=450.59 00:10:25.274 lat (usec): min=40940, max=42045, avg=41704.05, stdev=450.54 00:10:25.274 clat percentiles (usec): 00:10:25.274 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:25.274 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:25.274 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:25.274 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:25.274 | 99.99th=[42206] 00:10:25.274 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:10:25.274 slat (nsec): min=9188, max=67764, avg=29583.33, stdev=10680.75 00:10:25.274 clat (usec): min=245, max=1573, avg=589.07, stdev=148.16 00:10:25.275 lat (usec): min=255, max=1584, avg=618.65, stdev=153.84 00:10:25.275 clat percentiles (usec): 00:10:25.275 | 1.00th=[ 281], 5.00th=[ 326], 10.00th=[ 359], 20.00th=[ 465], 00:10:25.275 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:10:25.275 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 799], 00:10:25.275 | 99.00th=[ 873], 99.50th=[ 898], 99.90th=[ 1582], 99.95th=[ 1582], 00:10:25.275 | 99.99th=[ 1582] 00:10:25.275 bw ( KiB/s): min= 4096, max= 4096, per=51.75%, avg=4096.00, stdev= 0.00, samples=1 00:10:25.275 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:25.275 lat (usec) : 250=0.38%, 500=24.20%, 750=61.44%, 1000=10.59% 00:10:25.275 lat (msec) : 2=0.19%, 50=3.21% 00:10:25.275 cpu : usr=1.07%, sys=1.75%, ctx=530, majf=0, minf=1 00:10:25.275 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.275 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.275 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.275 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.275 
00:10:25.275 Run status group 0 (all jobs): 00:10:25.275 READ: bw=939KiB/s (962kB/s), 66.1KiB/s-763KiB/s (67.7kB/s-782kB/s), io=972KiB (995kB), run=1001-1035msec 00:10:25.275 WRITE: bw=7915KiB/s (8105kB/s), 1979KiB/s-2046KiB/s (2026kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1035msec 00:10:25.275 00:10:25.275 Disk stats (read/write): 00:10:25.275 nvme0n1: ios=63/512, merge=0/0, ticks=606/214, in_queue=820, util=87.07% 00:10:25.275 nvme0n2: ios=111/512, merge=0/0, ticks=801/288, in_queue=1089, util=97.24% 00:10:25.275 nvme0n3: ios=168/512, merge=0/0, ticks=957/251, in_queue=1208, util=91.85% 00:10:25.275 nvme0n4: ios=12/512, merge=0/0, ticks=501/235, in_queue=736, util=89.51% 00:10:25.275 18:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:25.275 [global] 00:10:25.275 thread=1 00:10:25.275 invalidate=1 00:10:25.275 rw=randwrite 00:10:25.275 time_based=1 00:10:25.275 runtime=1 00:10:25.275 ioengine=libaio 00:10:25.275 direct=1 00:10:25.275 bs=4096 00:10:25.275 iodepth=1 00:10:25.275 norandommap=0 00:10:25.275 numjobs=1 00:10:25.275 00:10:25.275 verify_dump=1 00:10:25.275 verify_backlog=512 00:10:25.275 verify_state_save=0 00:10:25.275 do_verify=1 00:10:25.275 verify=crc32c-intel 00:10:25.275 [job0] 00:10:25.275 filename=/dev/nvme0n1 00:10:25.275 [job1] 00:10:25.275 filename=/dev/nvme0n2 00:10:25.275 [job2] 00:10:25.275 filename=/dev/nvme0n3 00:10:25.275 [job3] 00:10:25.275 filename=/dev/nvme0n4 00:10:25.275 Could not set queue depth (nvme0n1) 00:10:25.275 Could not set queue depth (nvme0n2) 00:10:25.275 Could not set queue depth (nvme0n3) 00:10:25.275 Could not set queue depth (nvme0n4) 00:10:25.535 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.535 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:10:25.535 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.535 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.535 fio-3.35 00:10:25.535 Starting 4 threads 00:10:26.917 00:10:26.917 job0: (groupid=0, jobs=1): err= 0: pid=1849260: Tue Nov 19 18:09:28 2024 00:10:26.917 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:26.917 slat (nsec): min=7872, max=59149, avg=25432.67, stdev=3910.19 00:10:26.917 clat (usec): min=604, max=1414, avg=1174.69, stdev=129.78 00:10:26.917 lat (usec): min=629, max=1439, avg=1200.12, stdev=129.91 00:10:26.917 clat percentiles (usec): 00:10:26.917 | 1.00th=[ 742], 5.00th=[ 938], 10.00th=[ 1004], 20.00th=[ 1074], 00:10:26.917 | 30.00th=[ 1123], 40.00th=[ 1172], 50.00th=[ 1205], 60.00th=[ 1237], 00:10:26.917 | 70.00th=[ 1254], 80.00th=[ 1287], 90.00th=[ 1303], 95.00th=[ 1319], 00:10:26.917 | 99.00th=[ 1369], 99.50th=[ 1401], 99.90th=[ 1418], 99.95th=[ 1418], 00:10:26.917 | 99.99th=[ 1418] 00:10:26.917 write: IOPS=528, BW=2114KiB/s (2165kB/s)(2116KiB/1001msec); 0 zone resets 00:10:26.917 slat (nsec): min=9216, max=54804, avg=28881.72, stdev=8416.31 00:10:26.917 clat (usec): min=217, max=938, avg=684.30, stdev=119.18 00:10:26.917 lat (usec): min=237, max=992, avg=713.18, stdev=121.87 00:10:26.917 clat percentiles (usec): 00:10:26.917 | 1.00th=[ 322], 5.00th=[ 457], 10.00th=[ 529], 20.00th=[ 594], 00:10:26.917 | 30.00th=[ 644], 40.00th=[ 668], 50.00th=[ 701], 60.00th=[ 725], 00:10:26.917 | 70.00th=[ 758], 80.00th=[ 783], 90.00th=[ 816], 95.00th=[ 848], 00:10:26.917 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 938], 99.95th=[ 938], 00:10:26.917 | 99.99th=[ 938] 00:10:26.917 bw ( KiB/s): min= 4096, max= 4096, per=45.34%, avg=4096.00, stdev= 0.00, samples=1 00:10:26.917 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:26.917 lat (usec) : 250=0.19%, 500=3.27%, 750=31.70%, 
1000=20.37% 00:10:26.917 lat (msec) : 2=44.48% 00:10:26.917 cpu : usr=1.40%, sys=3.20%, ctx=1041, majf=0, minf=1 00:10:26.917 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.918 issued rwts: total=512,529,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.918 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.918 job1: (groupid=0, jobs=1): err= 0: pid=1849276: Tue Nov 19 18:09:28 2024 00:10:26.918 read: IOPS=16, BW=66.0KiB/s (67.5kB/s)(68.0KiB/1031msec) 00:10:26.918 slat (nsec): min=25642, max=26733, avg=25965.29, stdev=240.92 00:10:26.918 clat (usec): min=41830, max=42124, avg=41965.99, stdev=80.28 00:10:26.918 lat (usec): min=41856, max=42150, avg=41991.95, stdev=80.28 00:10:26.918 clat percentiles (usec): 00:10:26.918 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:10:26.918 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:26.918 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:26.918 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:26.918 | 99.99th=[42206] 00:10:26.918 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:10:26.918 slat (nsec): min=8701, max=54270, avg=27023.67, stdev=10204.63 00:10:26.918 clat (usec): min=130, max=875, avg=584.52, stdev=137.14 00:10:26.918 lat (usec): min=164, max=907, avg=611.55, stdev=143.23 00:10:26.918 clat percentiles (usec): 00:10:26.918 | 1.00th=[ 255], 5.00th=[ 293], 10.00th=[ 383], 20.00th=[ 478], 00:10:26.918 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 635], 00:10:26.918 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 775], 00:10:26.918 | 99.00th=[ 840], 99.50th=[ 865], 99.90th=[ 873], 99.95th=[ 873], 00:10:26.918 | 99.99th=[ 873] 00:10:26.918 bw ( 
KiB/s): min= 4096, max= 4096, per=45.34%, avg=4096.00, stdev= 0.00, samples=1 00:10:26.918 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:26.918 lat (usec) : 250=0.95%, 500=22.31%, 750=65.97%, 1000=7.56% 00:10:26.918 lat (msec) : 50=3.21% 00:10:26.918 cpu : usr=1.65%, sys=1.07%, ctx=529, majf=0, minf=1 00:10:26.918 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.918 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.918 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.918 job2: (groupid=0, jobs=1): err= 0: pid=1849298: Tue Nov 19 18:09:28 2024 00:10:26.918 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:26.918 slat (nsec): min=6450, max=46449, avg=26485.00, stdev=4516.43 00:10:26.918 clat (usec): min=527, max=1119, avg=945.94, stdev=83.72 00:10:26.918 lat (usec): min=553, max=1146, avg=972.42, stdev=84.65 00:10:26.918 clat percentiles (usec): 00:10:26.918 | 1.00th=[ 668], 5.00th=[ 799], 10.00th=[ 832], 20.00th=[ 889], 00:10:26.918 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 979], 00:10:26.918 | 70.00th=[ 988], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:10:26.918 | 99.00th=[ 1106], 99.50th=[ 1106], 99.90th=[ 1123], 99.95th=[ 1123], 00:10:26.918 | 99.99th=[ 1123] 00:10:26.918 write: IOPS=779, BW=3117KiB/s (3192kB/s)(3120KiB/1001msec); 0 zone resets 00:10:26.918 slat (nsec): min=9169, max=66692, avg=29804.42, stdev=9207.19 00:10:26.918 clat (usec): min=224, max=965, avg=601.14, stdev=107.87 00:10:26.918 lat (usec): min=257, max=982, avg=630.95, stdev=111.04 00:10:26.918 clat percentiles (usec): 00:10:26.918 | 1.00th=[ 355], 5.00th=[ 420], 10.00th=[ 457], 20.00th=[ 502], 00:10:26.918 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:10:26.918 | 
70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 734], 95.00th=[ 758], 00:10:26.918 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 963], 99.95th=[ 963], 00:10:26.918 | 99.99th=[ 963] 00:10:26.918 bw ( KiB/s): min= 4096, max= 4096, per=45.34%, avg=4096.00, stdev= 0.00, samples=1 00:10:26.918 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:26.918 lat (usec) : 250=0.08%, 500=11.76%, 750=45.74%, 1000=32.35% 00:10:26.918 lat (msec) : 2=10.06% 00:10:26.918 cpu : usr=2.50%, sys=4.90%, ctx=1292, majf=0, minf=1 00:10:26.918 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.918 issued rwts: total=512,780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.918 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.918 job3: (groupid=0, jobs=1): err= 0: pid=1849305: Tue Nov 19 18:09:28 2024 00:10:26.918 read: IOPS=474, BW=1897KiB/s (1943kB/s)(1960KiB/1033msec) 00:10:26.918 slat (nsec): min=7576, max=39988, avg=25642.98, stdev=1933.75 00:10:26.918 clat (usec): min=543, max=41942, avg=1391.02, stdev=4065.13 00:10:26.918 lat (usec): min=569, max=41969, avg=1416.66, stdev=4065.16 00:10:26.918 clat percentiles (usec): 00:10:26.918 | 1.00th=[ 652], 5.00th=[ 791], 10.00th=[ 832], 20.00th=[ 914], 00:10:26.918 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1012], 00:10:26.918 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1139], 00:10:26.918 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:10:26.918 | 99.99th=[41681] 00:10:26.918 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:10:26.918 slat (nsec): min=9564, max=52140, avg=28482.46, stdev=9083.09 00:10:26.918 clat (usec): min=148, max=922, avg=616.18, stdev=122.72 00:10:26.918 lat (usec): min=159, max=968, avg=644.67, stdev=127.00 
00:10:26.918 clat percentiles (usec): 00:10:26.918 | 1.00th=[ 330], 5.00th=[ 375], 10.00th=[ 441], 20.00th=[ 519], 00:10:26.918 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 668], 00:10:26.918 | 70.00th=[ 693], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 783], 00:10:26.918 | 99.00th=[ 857], 99.50th=[ 857], 99.90th=[ 922], 99.95th=[ 922], 00:10:26.918 | 99.99th=[ 922] 00:10:26.918 bw ( KiB/s): min= 4096, max= 4096, per=45.34%, avg=4096.00, stdev= 0.00, samples=1 00:10:26.918 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:26.918 lat (usec) : 250=0.30%, 500=8.48%, 750=38.02%, 1000=31.04% 00:10:26.918 lat (msec) : 2=21.66%, 50=0.50% 00:10:26.918 cpu : usr=1.74%, sys=2.52%, ctx=1002, majf=0, minf=1 00:10:26.918 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.918 issued rwts: total=490,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.918 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.918 00:10:26.918 Run status group 0 (all jobs): 00:10:26.918 READ: bw=5928KiB/s (6071kB/s), 66.0KiB/s-2046KiB/s (67.5kB/s-2095kB/s), io=6124KiB (6271kB), run=1001-1033msec 00:10:26.918 WRITE: bw=9034KiB/s (9251kB/s), 1983KiB/s-3117KiB/s (2030kB/s-3192kB/s), io=9332KiB (9556kB), run=1001-1033msec 00:10:26.918 00:10:26.918 Disk stats (read/write): 00:10:26.918 nvme0n1: ios=437/512, merge=0/0, ticks=492/333, in_queue=825, util=87.47% 00:10:26.918 nvme0n2: ios=46/512, merge=0/0, ticks=562/229, in_queue=791, util=87.67% 00:10:26.918 nvme0n3: ios=512/515, merge=0/0, ticks=455/248, in_queue=703, util=88.40% 00:10:26.918 nvme0n4: ios=485/512, merge=0/0, ticks=481/302, in_queue=783, util=89.53% 00:10:26.918 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:26.918 [global] 00:10:26.918 thread=1 00:10:26.918 invalidate=1 00:10:26.918 rw=write 00:10:26.918 time_based=1 00:10:26.918 runtime=1 00:10:26.918 ioengine=libaio 00:10:26.918 direct=1 00:10:26.918 bs=4096 00:10:26.918 iodepth=128 00:10:26.918 norandommap=0 00:10:26.918 numjobs=1 00:10:26.918 00:10:26.918 verify_dump=1 00:10:26.918 verify_backlog=512 00:10:26.918 verify_state_save=0 00:10:26.918 do_verify=1 00:10:26.918 verify=crc32c-intel 00:10:26.918 [job0] 00:10:26.918 filename=/dev/nvme0n1 00:10:26.918 [job1] 00:10:26.918 filename=/dev/nvme0n2 00:10:26.918 [job2] 00:10:26.918 filename=/dev/nvme0n3 00:10:26.918 [job3] 00:10:26.918 filename=/dev/nvme0n4 00:10:26.918 Could not set queue depth (nvme0n1) 00:10:26.918 Could not set queue depth (nvme0n2) 00:10:26.918 Could not set queue depth (nvme0n3) 00:10:26.918 Could not set queue depth (nvme0n4) 00:10:27.178 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.178 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.178 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.178 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.178 fio-3.35 00:10:27.178 Starting 4 threads 00:10:28.560 00:10:28.560 job0: (groupid=0, jobs=1): err= 0: pid=1849749: Tue Nov 19 18:09:29 2024 00:10:28.560 read: IOPS=7812, BW=30.5MiB/s (32.0MB/s)(30.7MiB/1005msec) 00:10:28.560 slat (nsec): min=890, max=15761k, avg=64166.24, stdev=508581.73 00:10:28.560 clat (usec): min=1808, max=27856, avg=8771.67, stdev=2506.56 00:10:28.560 lat (usec): min=3229, max=27858, avg=8835.84, stdev=2530.44 00:10:28.560 clat percentiles (usec): 00:10:28.560 | 1.00th=[ 3523], 5.00th=[ 6128], 10.00th=[ 
6783], 20.00th=[ 7373], 00:10:28.560 | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8225], 60.00th=[ 8455], 00:10:28.560 | 70.00th=[ 8979], 80.00th=[10159], 90.00th=[11600], 95.00th=[12780], 00:10:28.560 | 99.00th=[19268], 99.50th=[21890], 99.90th=[26346], 99.95th=[26346], 00:10:28.560 | 99.99th=[27919] 00:10:28.560 write: IOPS=8151, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1005msec); 0 zone resets 00:10:28.560 slat (nsec): min=1558, max=6411.1k, avg=52622.30, stdev=327855.51 00:10:28.560 clat (usec): min=1195, max=14772, avg=7150.53, stdev=1563.42 00:10:28.560 lat (usec): min=1206, max=14775, avg=7203.15, stdev=1586.32 00:10:28.560 clat percentiles (usec): 00:10:28.560 | 1.00th=[ 2671], 5.00th=[ 3982], 10.00th=[ 4621], 20.00th=[ 6194], 00:10:28.560 | 30.00th=[ 6915], 40.00th=[ 7308], 50.00th=[ 7635], 60.00th=[ 7832], 00:10:28.560 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8356], 95.00th=[ 8586], 00:10:28.560 | 99.00th=[10945], 99.50th=[11994], 99.90th=[14353], 99.95th=[14484], 00:10:28.560 | 99.99th=[14746] 00:10:28.560 bw ( KiB/s): min=32768, max=32768, per=29.71%, avg=32768.00, stdev= 0.00, samples=2 00:10:28.560 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:10:28.560 lat (msec) : 2=0.19%, 4=3.57%, 10=85.05%, 20=10.74%, 50=0.45% 00:10:28.560 cpu : usr=4.98%, sys=7.37%, ctx=771, majf=0, minf=1 00:10:28.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:28.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.560 issued rwts: total=7852,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.560 job1: (groupid=0, jobs=1): err= 0: pid=1849754: Tue Nov 19 18:09:29 2024 00:10:28.560 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:10:28.560 slat (nsec): min=887, max=5602.0k, avg=78381.20, stdev=433626.16 00:10:28.560 clat 
(usec): min=6272, max=29838, avg=10022.48, stdev=1849.88 00:10:28.560 lat (usec): min=6274, max=31153, avg=10100.86, stdev=1882.48 00:10:28.560 clat percentiles (usec): 00:10:28.560 | 1.00th=[ 7504], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9241], 00:10:28.560 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:10:28.560 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10945], 95.00th=[11863], 00:10:28.560 | 99.00th=[15139], 99.50th=[29230], 99.90th=[29754], 99.95th=[29754], 00:10:28.560 | 99.99th=[29754] 00:10:28.560 write: IOPS=6219, BW=24.3MiB/s (25.5MB/s)(24.4MiB/1004msec); 0 zone resets 00:10:28.560 slat (nsec): min=1519, max=15636k, avg=79704.71, stdev=536692.71 00:10:28.560 clat (usec): min=3498, max=42148, avg=10424.05, stdev=4634.52 00:10:28.560 lat (usec): min=3844, max=42179, avg=10503.75, stdev=4680.96 00:10:28.560 clat percentiles (usec): 00:10:28.560 | 1.00th=[ 6652], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8356], 00:10:28.560 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9372], 00:10:28.560 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[16188], 95.00th=[23200], 00:10:28.560 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31327], 99.95th=[34341], 00:10:28.561 | 99.99th=[42206] 00:10:28.561 bw ( KiB/s): min=24576, max=24632, per=22.31%, avg=24604.00, stdev=39.60, samples=2 00:10:28.561 iops : min= 6144, max= 6158, avg=6151.00, stdev= 9.90, samples=2 00:10:28.561 lat (msec) : 4=0.09%, 10=68.64%, 20=27.52%, 50=3.75% 00:10:28.561 cpu : usr=2.79%, sys=4.19%, ctx=792, majf=0, minf=2 00:10:28.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:28.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.561 issued rwts: total=6144,6244,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.561 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.561 job2: (groupid=0, jobs=1): 
err= 0: pid=1849762: Tue Nov 19 18:09:29 2024 00:10:28.561 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:10:28.561 slat (nsec): min=978, max=4803.8k, avg=78493.54, stdev=498537.01 00:10:28.561 clat (usec): min=2826, max=14835, avg=9670.71, stdev=1319.81 00:10:28.561 lat (usec): min=2829, max=14844, avg=9749.20, stdev=1377.20 00:10:28.561 clat percentiles (usec): 00:10:28.561 | 1.00th=[ 6456], 5.00th=[ 7242], 10.00th=[ 8029], 20.00th=[ 8848], 00:10:28.561 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 00:10:28.561 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10814], 95.00th=[12256], 00:10:28.561 | 99.00th=[13698], 99.50th=[14091], 99.90th=[14484], 99.95th=[14615], 00:10:28.561 | 99.99th=[14877] 00:10:28.561 write: IOPS=6664, BW=26.0MiB/s (27.3MB/s)(26.1MiB/1003msec); 0 zone resets 00:10:28.561 slat (nsec): min=1647, max=4558.3k, avg=67045.76, stdev=227935.74 00:10:28.561 clat (usec): min=2637, max=14097, avg=9344.27, stdev=1209.78 00:10:28.561 lat (usec): min=2642, max=14594, avg=9411.32, stdev=1218.23 00:10:28.561 clat percentiles (usec): 00:10:28.561 | 1.00th=[ 5866], 5.00th=[ 7701], 10.00th=[ 8225], 20.00th=[ 8717], 00:10:28.561 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9503], 00:10:28.561 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10421], 95.00th=[11600], 00:10:28.561 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13829], 99.95th=[13960], 00:10:28.561 | 99.99th=[14091] 00:10:28.561 bw ( KiB/s): min=24592, max=28656, per=24.14%, avg=26624.00, stdev=2873.68, samples=2 00:10:28.561 iops : min= 6148, max= 7164, avg=6656.00, stdev=718.42, samples=2 00:10:28.561 lat (msec) : 4=0.28%, 10=73.67%, 20=26.06% 00:10:28.561 cpu : usr=3.89%, sys=5.19%, ctx=952, majf=0, minf=1 00:10:28.561 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:28.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:10:28.561 issued rwts: total=6656,6684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.561 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.561 job3: (groupid=0, jobs=1): err= 0: pid=1849769: Tue Nov 19 18:09:29 2024 00:10:28.561 read: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec) 00:10:28.561 slat (nsec): min=985, max=10763k, avg=83452.71, stdev=594616.59 00:10:28.561 clat (usec): min=3947, max=20827, avg=10500.27, stdev=2453.80 00:10:28.561 lat (usec): min=3951, max=20857, avg=10583.72, stdev=2495.15 00:10:28.561 clat percentiles (usec): 00:10:28.561 | 1.00th=[ 5211], 5.00th=[ 7570], 10.00th=[ 8586], 20.00th=[ 8979], 00:10:28.561 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:10:28.561 | 70.00th=[10945], 80.00th=[11863], 90.00th=[14091], 95.00th=[16057], 00:10:28.561 | 99.00th=[18220], 99.50th=[18482], 99.90th=[19268], 99.95th=[19530], 00:10:28.561 | 99.99th=[20841] 00:10:28.561 write: IOPS=6599, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1007msec); 0 zone resets 00:10:28.561 slat (nsec): min=1629, max=8295.8k, avg=68178.85, stdev=424947.61 00:10:28.561 clat (usec): min=1248, max=25389, avg=9512.70, stdev=3280.85 00:10:28.561 lat (usec): min=1259, max=25392, avg=9580.88, stdev=3317.68 00:10:28.561 clat percentiles (usec): 00:10:28.561 | 1.00th=[ 3130], 5.00th=[ 5014], 10.00th=[ 6259], 20.00th=[ 7963], 00:10:28.561 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9503], 00:10:28.561 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[13042], 95.00th=[16581], 00:10:28.561 | 99.00th=[22414], 99.50th=[22676], 99.90th=[25297], 99.95th=[25297], 00:10:28.561 | 99.99th=[25297] 00:10:28.561 bw ( KiB/s): min=24576, max=27576, per=23.64%, avg=26076.00, stdev=2121.32, samples=2 00:10:28.561 iops : min= 6144, max= 6894, avg=6519.00, stdev=530.33, samples=2 00:10:28.561 lat (msec) : 2=0.07%, 4=1.21%, 10=69.41%, 20=27.83%, 50=1.47% 00:10:28.561 cpu : usr=3.68%, sys=7.55%, ctx=631, majf=0, minf=1 00:10:28.561 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:28.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.561 issued rwts: total=6144,6646,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.561 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.561 00:10:28.561 Run status group 0 (all jobs): 00:10:28.561 READ: bw=104MiB/s (109MB/s), 23.8MiB/s-30.5MiB/s (25.0MB/s-32.0MB/s), io=105MiB (110MB), run=1003-1007msec 00:10:28.561 WRITE: bw=108MiB/s (113MB/s), 24.3MiB/s-31.8MiB/s (25.5MB/s-33.4MB/s), io=108MiB (114MB), run=1003-1007msec 00:10:28.561 00:10:28.561 Disk stats (read/write): 00:10:28.561 nvme0n1: ios=6706/6711, merge=0/0, ticks=55828/46011, in_queue=101839, util=87.47% 00:10:28.561 nvme0n2: ios=5160/5327, merge=0/0, ticks=17476/19690, in_queue=37166, util=88.28% 00:10:28.561 nvme0n3: ios=5520/5632, merge=0/0, ticks=26674/24693, in_queue=51367, util=100.00% 00:10:28.561 nvme0n4: ios=5120/5283, merge=0/0, ticks=52004/49666, in_queue=101670, util=89.53% 00:10:28.561 18:09:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:28.561 [global] 00:10:28.561 thread=1 00:10:28.561 invalidate=1 00:10:28.561 rw=randwrite 00:10:28.561 time_based=1 00:10:28.561 runtime=1 00:10:28.561 ioengine=libaio 00:10:28.561 direct=1 00:10:28.561 bs=4096 00:10:28.561 iodepth=128 00:10:28.561 norandommap=0 00:10:28.561 numjobs=1 00:10:28.561 00:10:28.561 verify_dump=1 00:10:28.561 verify_backlog=512 00:10:28.561 verify_state_save=0 00:10:28.561 do_verify=1 00:10:28.561 verify=crc32c-intel 00:10:28.561 [job0] 00:10:28.561 filename=/dev/nvme0n1 00:10:28.561 [job1] 00:10:28.561 filename=/dev/nvme0n2 00:10:28.561 [job2] 00:10:28.561 filename=/dev/nvme0n3 00:10:28.561 [job3] 00:10:28.561 
filename=/dev/nvme0n4 00:10:28.561 Could not set queue depth (nvme0n1) 00:10:28.561 Could not set queue depth (nvme0n2) 00:10:28.561 Could not set queue depth (nvme0n3) 00:10:28.561 Could not set queue depth (nvme0n4) 00:10:28.822 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.822 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.822 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.822 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.822 fio-3.35 00:10:28.822 Starting 4 threads 00:10:30.218 00:10:30.218 job0: (groupid=0, jobs=1): err= 0: pid=1850259: Tue Nov 19 18:09:31 2024 00:10:30.218 read: IOPS=8643, BW=33.8MiB/s (35.4MB/s)(34.0MiB/1007msec) 00:10:30.218 slat (nsec): min=936, max=7154.0k, avg=59350.25, stdev=433471.74 00:10:30.218 clat (usec): min=2570, max=14525, avg=7695.52, stdev=1710.72 00:10:30.218 lat (usec): min=2607, max=14541, avg=7754.87, stdev=1736.22 00:10:30.218 clat percentiles (usec): 00:10:30.218 | 1.00th=[ 3785], 5.00th=[ 5669], 10.00th=[ 6063], 20.00th=[ 6521], 00:10:30.218 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7308], 60.00th=[ 7635], 00:10:30.218 | 70.00th=[ 7963], 80.00th=[ 9110], 90.00th=[10159], 95.00th=[11207], 00:10:30.218 | 99.00th=[12780], 99.50th=[13042], 99.90th=[13566], 99.95th=[13960], 00:10:30.218 | 99.99th=[14484] 00:10:30.218 write: IOPS=8895, BW=34.7MiB/s (36.4MB/s)(35.0MiB/1007msec); 0 zone resets 00:10:30.218 slat (nsec): min=1551, max=9594.9k, avg=47570.99, stdev=305145.11 00:10:30.218 clat (usec): min=1119, max=20368, avg=6784.17, stdev=2105.13 00:10:30.218 lat (usec): min=1128, max=20377, avg=6831.74, stdev=2121.50 00:10:30.218 clat percentiles (usec): 00:10:30.218 | 1.00th=[ 2704], 5.00th=[ 3785], 10.00th=[ 4293], 20.00th=[ 5342], 
00:10:30.218 | 30.00th=[ 6456], 40.00th=[ 6718], 50.00th=[ 6915], 60.00th=[ 7111], 00:10:30.218 | 70.00th=[ 7242], 80.00th=[ 7504], 90.00th=[ 8225], 95.00th=[ 9634], 00:10:30.218 | 99.00th=[15270], 99.50th=[19792], 99.90th=[20055], 99.95th=[20317], 00:10:30.218 | 99.99th=[20317] 00:10:30.218 bw ( KiB/s): min=35080, max=35568, per=34.77%, avg=35324.00, stdev=345.07, samples=2 00:10:30.218 iops : min= 8770, max= 8892, avg=8831.00, stdev=86.27, samples=2 00:10:30.218 lat (msec) : 2=0.12%, 4=4.05%, 10=88.37%, 20=7.25%, 50=0.20% 00:10:30.218 cpu : usr=5.47%, sys=8.95%, ctx=836, majf=0, minf=1 00:10:30.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:30.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.218 issued rwts: total=8704,8958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.218 job1: (groupid=0, jobs=1): err= 0: pid=1850271: Tue Nov 19 18:09:31 2024 00:10:30.218 read: IOPS=6587, BW=25.7MiB/s (27.0MB/s)(25.8MiB/1004msec) 00:10:30.218 slat (nsec): min=873, max=11967k, avg=80126.50, stdev=574997.93 00:10:30.218 clat (usec): min=2839, max=38187, avg=9918.53, stdev=4867.66 00:10:30.218 lat (usec): min=3342, max=38191, avg=9998.66, stdev=4914.56 00:10:30.218 clat percentiles (usec): 00:10:30.218 | 1.00th=[ 5211], 5.00th=[ 6128], 10.00th=[ 6849], 20.00th=[ 7242], 00:10:30.218 | 30.00th=[ 7439], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8291], 00:10:30.218 | 70.00th=[ 9634], 80.00th=[11338], 90.00th=[16319], 95.00th=[18482], 00:10:30.218 | 99.00th=[34341], 99.50th=[36439], 99.90th=[38011], 99.95th=[38011], 00:10:30.218 | 99.99th=[38011] 00:10:30.218 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:10:30.218 slat (nsec): min=1489, max=9716.4k, avg=62889.62, stdev=437973.43 00:10:30.218 clat (usec): min=582, max=60448, 
avg=9290.69, stdev=7125.60 00:10:30.218 lat (usec): min=590, max=60450, avg=9353.57, stdev=7167.04 00:10:30.218 clat percentiles (usec): 00:10:30.218 | 1.00th=[ 2376], 5.00th=[ 4293], 10.00th=[ 5538], 20.00th=[ 6652], 00:10:30.218 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7635], 00:10:30.218 | 70.00th=[ 8455], 80.00th=[10028], 90.00th=[13960], 95.00th=[18744], 00:10:30.218 | 99.00th=[53216], 99.50th=[59507], 99.90th=[60556], 99.95th=[60556], 00:10:30.218 | 99.99th=[60556] 00:10:30.218 bw ( KiB/s): min=26056, max=27192, per=26.21%, avg=26624.00, stdev=803.27, samples=2 00:10:30.218 iops : min= 6514, max= 6798, avg=6656.00, stdev=200.82, samples=2 00:10:30.218 lat (usec) : 750=0.02%, 1000=0.03% 00:10:30.218 lat (msec) : 2=0.29%, 4=1.41%, 10=73.42%, 20=21.05%, 50=3.19% 00:10:30.218 lat (msec) : 100=0.59% 00:10:30.218 cpu : usr=4.29%, sys=6.38%, ctx=599, majf=0, minf=1 00:10:30.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:30.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.218 issued rwts: total=6614,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.218 job2: (groupid=0, jobs=1): err= 0: pid=1850283: Tue Nov 19 18:09:31 2024 00:10:30.218 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:10:30.218 slat (nsec): min=997, max=11380k, avg=89325.70, stdev=685586.09 00:10:30.218 clat (usec): min=3506, max=34280, avg=11694.13, stdev=4423.19 00:10:30.218 lat (usec): min=3514, max=34306, avg=11783.46, stdev=4476.01 00:10:30.218 clat percentiles (usec): 00:10:30.218 | 1.00th=[ 5538], 5.00th=[ 7439], 10.00th=[ 8029], 20.00th=[ 8356], 00:10:30.218 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9896], 60.00th=[11469], 00:10:30.218 | 70.00th=[12518], 80.00th=[16188], 90.00th=[17433], 95.00th=[21627], 00:10:30.218 | 
99.00th=[23462], 99.50th=[24773], 99.90th=[28967], 99.95th=[31065], 00:10:30.218 | 99.99th=[34341] 00:10:30.218 write: IOPS=5299, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1006msec); 0 zone resets 00:10:30.218 slat (nsec): min=1623, max=14040k, avg=96809.83, stdev=683398.84 00:10:30.218 clat (usec): min=1149, max=82462, avg=12703.80, stdev=11662.94 00:10:30.218 lat (usec): min=1160, max=82471, avg=12800.61, stdev=11746.84 00:10:30.218 clat percentiles (usec): 00:10:30.218 | 1.00th=[ 3392], 5.00th=[ 5211], 10.00th=[ 5538], 20.00th=[ 8094], 00:10:30.218 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[10159], 00:10:30.218 | 70.00th=[11994], 80.00th=[15139], 90.00th=[17957], 95.00th=[24249], 00:10:30.218 | 99.00th=[77071], 99.50th=[80217], 99.90th=[82314], 99.95th=[82314], 00:10:30.218 | 99.99th=[82314] 00:10:30.218 bw ( KiB/s): min=19696, max=22064, per=20.56%, avg=20880.00, stdev=1674.43, samples=2 00:10:30.219 iops : min= 4924, max= 5516, avg=5220.00, stdev=418.61, samples=2 00:10:30.219 lat (msec) : 2=0.03%, 4=1.16%, 10=53.87%, 20=38.33%, 50=5.09% 00:10:30.219 lat (msec) : 100=1.52% 00:10:30.219 cpu : usr=4.38%, sys=5.17%, ctx=394, majf=0, minf=1 00:10:30.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:30.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.219 issued rwts: total=5120,5331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.219 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.219 job3: (groupid=0, jobs=1): err= 0: pid=1850284: Tue Nov 19 18:09:31 2024 00:10:30.219 read: IOPS=4269, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1006msec) 00:10:30.219 slat (nsec): min=1348, max=11415k, avg=106379.34, stdev=828581.12 00:10:30.219 clat (usec): min=3810, max=57979, avg=14621.02, stdev=6196.89 00:10:30.219 lat (usec): min=3816, max=57986, avg=14727.40, stdev=6269.76 00:10:30.219 clat percentiles (usec): 
00:10:30.219 | 1.00th=[ 7046], 5.00th=[ 7898], 10.00th=[ 8455], 20.00th=[ 8848], 00:10:30.219 | 30.00th=[ 9765], 40.00th=[11994], 50.00th=[13698], 60.00th=[15795], 00:10:30.219 | 70.00th=[17171], 80.00th=[19268], 90.00th=[20579], 95.00th=[23200], 00:10:30.219 | 99.00th=[41681], 99.50th=[46400], 99.90th=[57934], 99.95th=[57934], 00:10:30.219 | 99.99th=[57934] 00:10:30.219 write: IOPS=4600, BW=18.0MiB/s (18.8MB/s)(18.1MiB/1006msec); 0 zone resets 00:10:30.219 slat (nsec): min=1630, max=14308k, avg=73845.61, stdev=597747.28 00:10:30.219 clat (usec): min=1030, max=82149, avg=14029.38, stdev=12617.29 00:10:30.219 lat (usec): min=1039, max=82156, avg=14103.23, stdev=12673.56 00:10:30.219 clat percentiles (usec): 00:10:30.219 | 1.00th=[ 1975], 5.00th=[ 2769], 10.00th=[ 4490], 20.00th=[ 6718], 00:10:30.219 | 30.00th=[ 7963], 40.00th=[ 9765], 50.00th=[11076], 60.00th=[13173], 00:10:30.219 | 70.00th=[15270], 80.00th=[16909], 90.00th=[22938], 95.00th=[33817], 00:10:30.219 | 99.00th=[74974], 99.50th=[78119], 99.90th=[82314], 99.95th=[82314], 00:10:30.219 | 99.99th=[82314] 00:10:30.219 bw ( KiB/s): min=17720, max=19032, per=18.09%, avg=18376.00, stdev=927.72, samples=2 00:10:30.219 iops : min= 4430, max= 4758, avg=4594.00, stdev=231.93, samples=2 00:10:30.219 lat (msec) : 2=0.55%, 4=3.52%, 10=32.03%, 20=49.40%, 50=12.82% 00:10:30.219 lat (msec) : 100=1.68% 00:10:30.219 cpu : usr=3.48%, sys=5.57%, ctx=294, majf=0, minf=1 00:10:30.219 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:30.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.219 issued rwts: total=4295,4628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.219 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.219 00:10:30.219 Run status group 0 (all jobs): 00:10:30.219 READ: bw=95.9MiB/s (101MB/s), 16.7MiB/s-33.8MiB/s (17.5MB/s-35.4MB/s), io=96.6MiB (101MB), 
run=1004-1007msec 00:10:30.219 WRITE: bw=99.2MiB/s (104MB/s), 18.0MiB/s-34.7MiB/s (18.8MB/s-36.4MB/s), io=99.9MiB (105MB), run=1004-1007msec 00:10:30.219 00:10:30.219 Disk stats (read/write): 00:10:30.219 nvme0n1: ios=7218/7367, merge=0/0, ticks=52310/47920, in_queue=100230, util=87.47% 00:10:30.219 nvme0n2: ios=5278/5632, merge=0/0, ticks=39255/37531, in_queue=76786, util=87.93% 00:10:30.219 nvme0n3: ios=4096/4148, merge=0/0, ticks=47140/54541, in_queue=101681, util=88.25% 00:10:30.219 nvme0n4: ios=3681/4096, merge=0/0, ticks=50539/50424, in_queue=100963, util=89.39% 00:10:30.219 18:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:30.219 18:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1850594 00:10:30.219 18:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:30.219 18:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:30.219 [global] 00:10:30.219 thread=1 00:10:30.219 invalidate=1 00:10:30.219 rw=read 00:10:30.219 time_based=1 00:10:30.219 runtime=10 00:10:30.219 ioengine=libaio 00:10:30.219 direct=1 00:10:30.219 bs=4096 00:10:30.219 iodepth=1 00:10:30.219 norandommap=1 00:10:30.219 numjobs=1 00:10:30.219 00:10:30.219 [job0] 00:10:30.219 filename=/dev/nvme0n1 00:10:30.219 [job1] 00:10:30.219 filename=/dev/nvme0n2 00:10:30.219 [job2] 00:10:30.219 filename=/dev/nvme0n3 00:10:30.219 [job3] 00:10:30.219 filename=/dev/nvme0n4 00:10:30.219 Could not set queue depth (nvme0n1) 00:10:30.219 Could not set queue depth (nvme0n2) 00:10:30.219 Could not set queue depth (nvme0n3) 00:10:30.219 Could not set queue depth (nvme0n4) 00:10:30.479 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.479 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:10:30.479 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.479 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.479 fio-3.35 00:10:30.479 Starting 4 threads 00:10:33.775 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:33.776 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=9244672, buflen=4096 00:10:33.776 fio: pid=1850805, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:33.776 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:33.776 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=274432, buflen=4096 00:10:33.776 fio: pid=1850798, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:33.776 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.776 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:33.776 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.776 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:33.776 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=286720, buflen=4096 00:10:33.776 fio: pid=1850778, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 
00:10:33.776 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:33.776 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:33.776 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=311296, buflen=4096 00:10:33.776 fio: pid=1850782, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.037 00:10:34.037 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1850778: Tue Nov 19 18:09:35 2024 00:10:34.037 read: IOPS=24, BW=95.5KiB/s (97.8kB/s)(280KiB/2931msec) 00:10:34.037 slat (usec): min=25, max=25601, avg=508.74, stdev=3190.87 00:10:34.037 clat (usec): min=745, max=42040, avg=41035.73, stdev=4905.86 00:10:34.037 lat (usec): min=778, max=66922, avg=41551.36, stdev=5888.27 00:10:34.037 clat percentiles (usec): 00:10:34.037 | 1.00th=[ 750], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:34.037 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:34.037 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:34.037 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:34.037 | 99.99th=[42206] 00:10:34.037 bw ( KiB/s): min= 96, max= 104, per=3.07%, avg=97.60, stdev= 3.58, samples=5 00:10:34.037 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:10:34.037 lat (usec) : 750=1.41% 00:10:34.037 lat (msec) : 50=97.18% 00:10:34.037 cpu : usr=0.14%, sys=0.00%, ctx=73, majf=0, minf=2 00:10:34.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.037 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.037 issued rwts: 
total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.037 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1850782: Tue Nov 19 18:09:35 2024 00:10:34.037 read: IOPS=24, BW=97.3KiB/s (99.7kB/s)(304KiB/3123msec) 00:10:34.037 slat (usec): min=10, max=216, avg=32.70, stdev=36.69 00:10:34.037 clat (usec): min=964, max=42078, avg=40764.99, stdev=6578.72 00:10:34.037 lat (usec): min=989, max=42104, avg=40797.79, stdev=6578.80 00:10:34.037 clat percentiles (usec): 00:10:34.037 | 1.00th=[ 963], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:10:34.037 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:34.037 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:34.037 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:34.037 | 99.99th=[42206] 00:10:34.037 bw ( KiB/s): min= 96, max= 104, per=3.07%, avg=97.67, stdev= 3.20, samples=6 00:10:34.037 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:10:34.037 lat (usec) : 1000=1.30% 00:10:34.037 lat (msec) : 2=1.30%, 50=96.10% 00:10:34.037 cpu : usr=0.10%, sys=0.00%, ctx=80, majf=0, minf=1 00:10:34.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.037 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.037 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.037 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1850798: Tue Nov 19 18:09:35 2024 00:10:34.037 read: IOPS=24, BW=98.2KiB/s (101kB/s)(268KiB/2729msec) 00:10:34.037 slat (nsec): min=8650, max=34887, avg=26359.22, stdev=2410.87 00:10:34.037 clat (usec): min=450, max=41404, 
avg=40370.52, stdev=4951.44 00:10:34.037 lat (usec): min=485, max=41412, avg=40396.88, stdev=4950.36 00:10:34.037 clat percentiles (usec): 00:10:34.037 | 1.00th=[ 449], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:34.037 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:34.037 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:34.037 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:34.037 | 99.99th=[41157] 00:10:34.037 bw ( KiB/s): min= 96, max= 104, per=3.13%, avg=99.20, stdev= 4.38, samples=5 00:10:34.037 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:10:34.037 lat (usec) : 500=1.47% 00:10:34.037 lat (msec) : 50=97.06% 00:10:34.037 cpu : usr=0.11%, sys=0.00%, ctx=68, majf=0, minf=2 00:10:34.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.037 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.037 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.037 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1850805: Tue Nov 19 18:09:35 2024 00:10:34.037 read: IOPS=883, BW=3532KiB/s (3617kB/s)(9028KiB/2556msec) 00:10:34.037 slat (nsec): min=6931, max=55881, avg=25612.75, stdev=1601.11 00:10:34.037 clat (usec): min=248, max=42020, avg=1088.91, stdev=2276.27 00:10:34.037 lat (usec): min=256, max=42046, avg=1114.52, stdev=2276.29 00:10:34.037 clat percentiles (usec): 00:10:34.037 | 1.00th=[ 709], 5.00th=[ 873], 10.00th=[ 906], 20.00th=[ 930], 00:10:34.037 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 979], 00:10:34.037 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1045], 00:10:34.037 | 99.00th=[ 1106], 99.50th=[ 1123], 99.90th=[42206], 99.95th=[42206], 
00:10:34.037 | 99.99th=[42206] 00:10:34.037 bw ( KiB/s): min= 1712, max= 4128, per=100.00%, avg=3569.60, stdev=1039.87, samples=5 00:10:34.037 iops : min= 428, max= 1032, avg=892.40, stdev=259.97, samples=5 00:10:34.037 lat (usec) : 250=0.04%, 500=0.09%, 750=1.59%, 1000=75.64% 00:10:34.037 lat (msec) : 2=22.28%, 50=0.31% 00:10:34.037 cpu : usr=1.33%, sys=2.31%, ctx=2258, majf=0, minf=2 00:10:34.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.037 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.037 issued rwts: total=2258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.037 00:10:34.037 Run status group 0 (all jobs): 00:10:34.037 READ: bw=3164KiB/s (3240kB/s), 95.5KiB/s-3532KiB/s (97.8kB/s-3617kB/s), io=9880KiB (10.1MB), run=2556-3123msec 00:10:34.037 00:10:34.037 Disk stats (read/write): 00:10:34.037 nvme0n1: ios=67/0, merge=0/0, ticks=2750/0, in_queue=2750, util=92.09% 00:10:34.037 nvme0n2: ios=74/0, merge=0/0, ticks=3018/0, in_queue=3018, util=94.40% 00:10:34.037 nvme0n3: ios=62/0, merge=0/0, ticks=2502/0, in_queue=2502, util=95.55% 00:10:34.037 nvme0n4: ios=2257/0, merge=0/0, ticks=2512/0, in_queue=2512, util=96.35% 00:10:34.037 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.037 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:34.298 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.298 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:34.558 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.558 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:34.558 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.558 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:34.818 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:34.818 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1850594 00:10:34.818 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:34.818 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:34.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.818 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:34.818 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:34.818 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:34.818 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.818 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:34.818 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.818 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:34.818 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:34.818 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:34.818 nvmf hotplug test: fio failed as expected 00:10:34.818 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.078 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:35.078 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:35.078 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:35.078 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:35.078 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:35.078 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:35.078 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:35.078 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.078 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:35.078 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.078 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.078 rmmod nvme_tcp 00:10:35.078 rmmod nvme_fabrics 00:10:35.078 rmmod nvme_keyring 
00:10:35.078 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:35.078 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:35.078 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:35.078 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1847075 ']' 00:10:35.079 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1847075 00:10:35.079 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1847075 ']' 00:10:35.079 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1847075 00:10:35.079 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:35.079 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.079 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1847075 00:10:35.339 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.339 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.339 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1847075' 00:10:35.339 killing process with pid 1847075 00:10:35.339 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1847075 00:10:35.339 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1847075 00:10:35.339 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:35.339 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:35.339 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:35.339 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:35.339 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:35.339 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:35.339 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:35.339 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.339 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:35.339 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.339 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.339 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.883 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:37.883 00:10:37.883 real 0m29.358s 00:10:37.883 user 2m32.954s 00:10:37.883 sys 0m9.273s 00:10:37.883 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.883 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.883 ************************************ 00:10:37.883 END TEST nvmf_fio_target 00:10:37.883 ************************************ 00:10:37.883 18:09:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:37.883 
18:09:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.883 18:09:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.883 18:09:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.883 ************************************ 00:10:37.883 START TEST nvmf_bdevio 00:10:37.883 ************************************ 00:10:37.883 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:37.883 * Looking for test storage... 00:10:37.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.883 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.883 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.883 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.883 18:09:39 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.883 18:09:39 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.883 --rc genhtml_branch_coverage=1 00:10:37.883 --rc genhtml_function_coverage=1 00:10:37.883 --rc genhtml_legend=1 00:10:37.883 --rc geninfo_all_blocks=1 00:10:37.883 --rc geninfo_unexecuted_blocks=1 00:10:37.883 00:10:37.883 ' 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.883 --rc genhtml_branch_coverage=1 00:10:37.883 --rc genhtml_function_coverage=1 00:10:37.883 --rc genhtml_legend=1 00:10:37.883 --rc geninfo_all_blocks=1 00:10:37.883 --rc geninfo_unexecuted_blocks=1 00:10:37.883 00:10:37.883 ' 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.883 --rc genhtml_branch_coverage=1 00:10:37.883 --rc genhtml_function_coverage=1 00:10:37.883 --rc genhtml_legend=1 00:10:37.883 --rc geninfo_all_blocks=1 00:10:37.883 --rc geninfo_unexecuted_blocks=1 00:10:37.883 00:10:37.883 ' 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.883 --rc genhtml_branch_coverage=1 00:10:37.883 --rc genhtml_function_coverage=1 00:10:37.883 --rc genhtml_legend=1 00:10:37.883 --rc geninfo_all_blocks=1 00:10:37.883 --rc 
geninfo_unexecuted_blocks=1 00:10:37.883 00:10:37.883 ' 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.883 18:09:39 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.883 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:46.027 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:46.028 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.028 18:09:46 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:46.028 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:46.028 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:46.028 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:46.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:46.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:10:46.028 00:10:46.028 --- 10.0.0.2 ping statistics --- 00:10:46.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.028 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:46.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:46.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:10:46.028 00:10:46.028 --- 10.0.0.1 ping statistics --- 00:10:46.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.028 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1856020 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1856020 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1856020 ']' 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.028 18:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:46.028 [2024-11-19 18:09:46.642103] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:10:46.028 [2024-11-19 18:09:46.642208] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.028 [2024-11-19 18:09:46.758501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.028 [2024-11-19 18:09:46.811107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.028 [2024-11-19 18:09:46.811170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:46.028 [2024-11-19 18:09:46.811179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.028 [2024-11-19 18:09:46.811186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.029 [2024-11-19 18:09:46.811193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.029 [2024-11-19 18:09:46.813233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:46.029 [2024-11-19 18:09:46.813458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:46.029 [2024-11-19 18:09:46.813617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:46.029 [2024-11-19 18:09:46.813619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.029 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.029 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:46.029 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:46.029 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:46.029 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:46.290 [2024-11-19 18:09:47.517090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:46.290 Malloc0 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:46.290 [2024-11-19 
18:09:47.593156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:46.290 { 00:10:46.290 "params": { 00:10:46.290 "name": "Nvme$subsystem", 00:10:46.290 "trtype": "$TEST_TRANSPORT", 00:10:46.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.290 "adrfam": "ipv4", 00:10:46.290 "trsvcid": "$NVMF_PORT", 00:10:46.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.290 "hdgst": ${hdgst:-false}, 00:10:46.290 "ddgst": ${ddgst:-false} 00:10:46.290 }, 00:10:46.290 "method": "bdev_nvme_attach_controller" 00:10:46.290 } 00:10:46.290 EOF 00:10:46.290 )") 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:46.290 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:46.291 "params": { 00:10:46.291 "name": "Nvme1", 00:10:46.291 "trtype": "tcp", 00:10:46.291 "traddr": "10.0.0.2", 00:10:46.291 "adrfam": "ipv4", 00:10:46.291 "trsvcid": "4420", 00:10:46.291 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.291 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.291 "hdgst": false, 00:10:46.291 "ddgst": false 00:10:46.291 }, 00:10:46.291 "method": "bdev_nvme_attach_controller" 00:10:46.291 }' 00:10:46.291 [2024-11-19 18:09:47.650791] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:10:46.291 [2024-11-19 18:09:47.650857] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856170 ] 00:10:46.291 [2024-11-19 18:09:47.744217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:46.552 [2024-11-19 18:09:47.800547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.552 [2024-11-19 18:09:47.800710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.552 [2024-11-19 18:09:47.800710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.552 I/O targets: 00:10:46.552 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:46.552 00:10:46.552 00:10:46.552 CUnit - A unit testing framework for C - Version 2.1-3 00:10:46.552 http://cunit.sourceforge.net/ 00:10:46.552 00:10:46.552 00:10:46.552 Suite: bdevio tests on: Nvme1n1 00:10:46.552 Test: blockdev write read block ...passed 00:10:46.814 Test: blockdev write zeroes read block ...passed 00:10:46.814 Test: blockdev write zeroes read no split ...passed 00:10:46.814 Test: blockdev write zeroes read split 
...passed 00:10:46.814 Test: blockdev write zeroes read split partial ...passed 00:10:46.814 Test: blockdev reset ...[2024-11-19 18:09:48.139510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:46.814 [2024-11-19 18:09:48.139611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x863970 (9): Bad file descriptor 00:10:46.814 [2024-11-19 18:09:48.193487] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:46.814 passed 00:10:46.814 Test: blockdev write read 8 blocks ...passed 00:10:46.814 Test: blockdev write read size > 128k ...passed 00:10:46.814 Test: blockdev write read invalid size ...passed 00:10:46.814 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:46.814 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:46.814 Test: blockdev write read max offset ...passed 00:10:47.075 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:47.075 Test: blockdev writev readv 8 blocks ...passed 00:10:47.075 Test: blockdev writev readv 30 x 1block ...passed 00:10:47.075 Test: blockdev writev readv block ...passed 00:10:47.075 Test: blockdev writev readv size > 128k ...passed 00:10:47.075 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:47.075 Test: blockdev comparev and writev ...[2024-11-19 18:09:48.460434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:47.075 [2024-11-19 18:09:48.460487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:47.075 [2024-11-19 18:09:48.460504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:47.075 [2024-11-19 
18:09:48.460513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:47.075 [2024-11-19 18:09:48.461023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:47.075 [2024-11-19 18:09:48.461039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:47.075 [2024-11-19 18:09:48.461053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:47.075 [2024-11-19 18:09:48.461063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:47.075 [2024-11-19 18:09:48.461591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:47.075 [2024-11-19 18:09:48.461606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:47.075 [2024-11-19 18:09:48.461620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:47.075 [2024-11-19 18:09:48.461628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:47.075 [2024-11-19 18:09:48.462192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:47.075 [2024-11-19 18:09:48.462207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:47.075 [2024-11-19 18:09:48.462221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:47.075 [2024-11-19 18:09:48.462237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:47.075 passed 00:10:47.336 Test: blockdev nvme passthru rw ...passed 00:10:47.336 Test: blockdev nvme passthru vendor specific ...[2024-11-19 18:09:48.547006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:47.336 [2024-11-19 18:09:48.547024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:47.336 [2024-11-19 18:09:48.547409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:47.336 [2024-11-19 18:09:48.547424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:47.336 [2024-11-19 18:09:48.547831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:47.336 [2024-11-19 18:09:48.547844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:47.336 [2024-11-19 18:09:48.548225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:47.336 [2024-11-19 18:09:48.548239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:47.336 passed 00:10:47.336 Test: blockdev nvme admin passthru ...passed 00:10:47.336 Test: blockdev copy ...passed 00:10:47.336 00:10:47.336 Run Summary: Type Total Ran Passed Failed Inactive 00:10:47.336 suites 1 1 n/a 0 0 00:10:47.336 tests 23 23 23 0 0 00:10:47.336 asserts 152 152 152 0 n/a 00:10:47.336 00:10:47.336 Elapsed time = 1.278 seconds 
00:10:47.336 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.336 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.336 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:47.336 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.336 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:47.336 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:47.336 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:47.336 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:47.336 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:47.336 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:47.336 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:47.336 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:47.336 rmmod nvme_tcp 00:10:47.336 rmmod nvme_fabrics 00:10:47.336 rmmod nvme_keyring 00:10:47.336 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:47.597 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:47.597 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:47.597 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1856020 ']' 00:10:47.597 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1856020 00:10:47.597 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 1856020 ']' 00:10:47.597 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1856020 00:10:47.597 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:47.597 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.597 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1856020 00:10:47.597 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:47.597 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:47.597 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1856020' 00:10:47.597 killing process with pid 1856020 00:10:47.597 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1856020 00:10:47.597 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1856020 00:10:47.597 18:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:47.597 18:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:47.597 18:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:47.597 18:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:47.597 18:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:47.597 18:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:47.597 18:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:47.597 18:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:10:47.597 18:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:47.597 18:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.598 18:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.598 18:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:50.145 00:10:50.145 real 0m12.267s 00:10:50.145 user 0m13.206s 00:10:50.145 sys 0m6.304s 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.145 ************************************ 00:10:50.145 END TEST nvmf_bdevio 00:10:50.145 ************************************ 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:50.145 00:10:50.145 real 5m4.199s 00:10:50.145 user 11m39.462s 00:10:50.145 sys 1m51.383s 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:50.145 ************************************ 00:10:50.145 END TEST nvmf_target_core 00:10:50.145 ************************************ 00:10:50.145 18:09:51 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:50.145 18:09:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:50.145 18:09:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.145 18:09:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:10:50.145 ************************************ 00:10:50.145 START TEST nvmf_target_extra 00:10:50.145 ************************************ 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:50.145 * Looking for test storage... 00:10:50.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:50.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.145 --rc genhtml_branch_coverage=1 00:10:50.145 --rc genhtml_function_coverage=1 00:10:50.145 --rc genhtml_legend=1 00:10:50.145 --rc geninfo_all_blocks=1 
00:10:50.145 --rc geninfo_unexecuted_blocks=1 00:10:50.145 00:10:50.145 ' 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:50.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.145 --rc genhtml_branch_coverage=1 00:10:50.145 --rc genhtml_function_coverage=1 00:10:50.145 --rc genhtml_legend=1 00:10:50.145 --rc geninfo_all_blocks=1 00:10:50.145 --rc geninfo_unexecuted_blocks=1 00:10:50.145 00:10:50.145 ' 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:50.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.145 --rc genhtml_branch_coverage=1 00:10:50.145 --rc genhtml_function_coverage=1 00:10:50.145 --rc genhtml_legend=1 00:10:50.145 --rc geninfo_all_blocks=1 00:10:50.145 --rc geninfo_unexecuted_blocks=1 00:10:50.145 00:10:50.145 ' 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:50.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.145 --rc genhtml_branch_coverage=1 00:10:50.145 --rc genhtml_function_coverage=1 00:10:50.145 --rc genhtml_legend=1 00:10:50.145 --rc geninfo_all_blocks=1 00:10:50.145 --rc geninfo_unexecuted_blocks=1 00:10:50.145 00:10:50.145 ' 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.145 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:50.146 ************************************ 00:10:50.146 START TEST nvmf_example 00:10:50.146 ************************************ 00:10:50.146 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:50.409 * Looking for test storage... 00:10:50.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.409 
18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:50.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.409 --rc genhtml_branch_coverage=1 00:10:50.409 --rc genhtml_function_coverage=1 00:10:50.409 --rc genhtml_legend=1 00:10:50.409 --rc geninfo_all_blocks=1 00:10:50.409 --rc geninfo_unexecuted_blocks=1 00:10:50.409 00:10:50.409 ' 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:50.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.409 --rc genhtml_branch_coverage=1 00:10:50.409 --rc genhtml_function_coverage=1 00:10:50.409 --rc genhtml_legend=1 00:10:50.409 --rc geninfo_all_blocks=1 00:10:50.409 --rc geninfo_unexecuted_blocks=1 00:10:50.409 00:10:50.409 ' 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:50.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.409 --rc genhtml_branch_coverage=1 00:10:50.409 --rc genhtml_function_coverage=1 00:10:50.409 --rc genhtml_legend=1 00:10:50.409 --rc geninfo_all_blocks=1 00:10:50.409 --rc geninfo_unexecuted_blocks=1 00:10:50.409 00:10:50.409 ' 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:50.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.409 --rc 
genhtml_branch_coverage=1 00:10:50.409 --rc genhtml_function_coverage=1 00:10:50.409 --rc genhtml_legend=1 00:10:50.409 --rc geninfo_all_blocks=1 00:10:50.409 --rc geninfo_unexecuted_blocks=1 00:10:50.409 00:10:50.409 ' 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.409 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:50.410 18:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.410 
18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:50.410 18:09:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:58.553 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:58.554 18:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:58.554 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:58.554 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:58.554 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:58.554 18:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:58.554 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:58.554 
18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:58.554 18:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:58.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:10:58.554 00:10:58.554 --- 10.0.0.2 ping statistics --- 00:10:58.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.554 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:58.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:58.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:10:58.554 00:10:58.554 --- 10.0.0.1 ping statistics --- 00:10:58.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.554 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:58.554 18:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1860875 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1860875 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1860875 ']' 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:10:58.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.554 18:09:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:58.816 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.816 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:58.816 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:58.816 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:58.816 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:58.816 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:58.816 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.816 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:58.816 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.816 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:58.816 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.816 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:58.816 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.816 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:58.817 
18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:58.817 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.817 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:58.817 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.817 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:58.817 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:58.817 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.817 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:58.817 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.817 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.817 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.817 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.078 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.078 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:59.078 18:10:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:11.311 Initializing NVMe Controllers 00:11:11.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:11.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:11.311 Initialization complete. Launching workers. 00:11:11.311 ======================================================== 00:11:11.311 Latency(us) 00:11:11.311 Device Information : IOPS MiB/s Average min max 00:11:11.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18859.80 73.67 3393.09 594.77 15458.74 00:11:11.311 ======================================================== 00:11:11.311 Total : 18859.80 73.67 3393.09 594.77 15458.74 00:11:11.311 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:11.311 rmmod nvme_tcp 00:11:11.311 rmmod nvme_fabrics 00:11:11.311 rmmod nvme_keyring 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1860875 ']' 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1860875 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1860875 ']' 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1860875 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1860875 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1860875' 00:11:11.311 killing process with pid 1860875 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1860875 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1860875 00:11:11.311 nvmf threads initialize successfully 00:11:11.311 bdev subsystem init successfully 00:11:11.311 created a nvmf target service 00:11:11.311 create targets's poll groups done 00:11:11.311 all subsystems of target started 00:11:11.311 nvmf target is running 00:11:11.311 all subsystems of target stopped 00:11:11.311 destroy targets's poll groups done 00:11:11.311 destroyed the nvmf target service 00:11:11.311 bdev subsystem 
finish successfully 00:11:11.311 nvmf threads destroy successfully 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.311 18:10:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.573 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:11.573 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:11.573 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:11.573 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.573 00:11:11.573 real 0m21.421s 00:11:11.573 user 0m46.794s 00:11:11.573 sys 0m6.991s 00:11:11.573 
18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.573 18:10:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.573 ************************************ 00:11:11.573 END TEST nvmf_example 00:11:11.573 ************************************ 00:11:11.573 18:10:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:11.573 18:10:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:11.573 18:10:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.573 18:10:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:11.835 ************************************ 00:11:11.835 START TEST nvmf_filesystem 00:11:11.835 ************************************ 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:11.835 * Looking for test storage... 
00:11:11.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:11.835 
18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:11.835 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:11.835 --rc genhtml_branch_coverage=1 00:11:11.835 --rc genhtml_function_coverage=1 00:11:11.835 --rc genhtml_legend=1 00:11:11.835 --rc geninfo_all_blocks=1 00:11:11.835 --rc geninfo_unexecuted_blocks=1 00:11:11.835 00:11:11.835 ' 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:11.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.835 --rc genhtml_branch_coverage=1 00:11:11.835 --rc genhtml_function_coverage=1 00:11:11.835 --rc genhtml_legend=1 00:11:11.835 --rc geninfo_all_blocks=1 00:11:11.835 --rc geninfo_unexecuted_blocks=1 00:11:11.835 00:11:11.835 ' 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:11.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.835 --rc genhtml_branch_coverage=1 00:11:11.835 --rc genhtml_function_coverage=1 00:11:11.835 --rc genhtml_legend=1 00:11:11.835 --rc geninfo_all_blocks=1 00:11:11.835 --rc geninfo_unexecuted_blocks=1 00:11:11.835 00:11:11.835 ' 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:11.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.835 --rc genhtml_branch_coverage=1 00:11:11.835 --rc genhtml_function_coverage=1 00:11:11.835 --rc genhtml_legend=1 00:11:11.835 --rc geninfo_all_blocks=1 00:11:11.835 --rc geninfo_unexecuted_blocks=1 00:11:11.835 00:11:11.835 ' 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:11.835 18:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:11.835 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:11.836 18:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:11.836 18:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:11.836 18:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:11.836 18:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:11.836 
18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:11.836 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:11.836 #define SPDK_CONFIG_H 00:11:11.836 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:11.836 #define SPDK_CONFIG_APPS 1 00:11:11.836 #define SPDK_CONFIG_ARCH native 00:11:11.836 #undef SPDK_CONFIG_ASAN 00:11:11.836 #undef SPDK_CONFIG_AVAHI 00:11:11.836 #undef SPDK_CONFIG_CET 00:11:11.836 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:11.836 #define SPDK_CONFIG_COVERAGE 1 00:11:11.836 #define SPDK_CONFIG_CROSS_PREFIX 00:11:11.836 #undef SPDK_CONFIG_CRYPTO 00:11:11.836 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:11.837 #undef SPDK_CONFIG_CUSTOMOCF 00:11:11.837 #undef SPDK_CONFIG_DAOS 00:11:11.837 #define SPDK_CONFIG_DAOS_DIR 00:11:11.837 #define SPDK_CONFIG_DEBUG 1 00:11:11.837 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:11.837 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:11.837 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:11.837 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:11.837 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:11.837 #undef SPDK_CONFIG_DPDK_UADK 00:11:11.837 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:11.837 #define SPDK_CONFIG_EXAMPLES 1 00:11:11.837 #undef SPDK_CONFIG_FC 00:11:11.837 #define SPDK_CONFIG_FC_PATH 00:11:11.837 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:11.837 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:11.837 #define SPDK_CONFIG_FSDEV 1 00:11:11.837 #undef SPDK_CONFIG_FUSE 00:11:11.837 #undef SPDK_CONFIG_FUZZER 00:11:11.837 #define SPDK_CONFIG_FUZZER_LIB 00:11:11.837 #undef SPDK_CONFIG_GOLANG 00:11:11.837 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:11.837 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:11.837 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:11.837 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:11.837 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:11.837 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:11.837 #undef SPDK_CONFIG_HAVE_LZ4 00:11:11.837 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:11.837 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:11.837 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:11.837 #define SPDK_CONFIG_IDXD 1 00:11:11.837 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:11.837 #undef SPDK_CONFIG_IPSEC_MB 00:11:11.837 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:11.837 #define SPDK_CONFIG_ISAL 1 00:11:11.837 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:11.837 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:11.837 #define SPDK_CONFIG_LIBDIR 00:11:11.837 #undef SPDK_CONFIG_LTO 00:11:11.837 #define SPDK_CONFIG_MAX_LCORES 128 00:11:11.837 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:11.837 #define SPDK_CONFIG_NVME_CUSE 1 00:11:11.837 #undef SPDK_CONFIG_OCF 00:11:11.837 #define SPDK_CONFIG_OCF_PATH 00:11:11.837 #define SPDK_CONFIG_OPENSSL_PATH 00:11:11.837 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:11.837 #define SPDK_CONFIG_PGO_DIR 00:11:11.837 #undef SPDK_CONFIG_PGO_USE 00:11:11.837 #define SPDK_CONFIG_PREFIX /usr/local 00:11:11.837 #undef SPDK_CONFIG_RAID5F 00:11:11.837 #undef SPDK_CONFIG_RBD 00:11:11.837 #define SPDK_CONFIG_RDMA 1 00:11:11.837 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:11.837 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:11.837 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:11.837 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:11.837 #define SPDK_CONFIG_SHARED 1 00:11:11.837 #undef SPDK_CONFIG_SMA 00:11:11.837 #define SPDK_CONFIG_TESTS 1 00:11:11.837 #undef SPDK_CONFIG_TSAN 00:11:11.837 #define SPDK_CONFIG_UBLK 1 00:11:11.837 #define SPDK_CONFIG_UBSAN 1 00:11:11.837 #undef SPDK_CONFIG_UNIT_TESTS 00:11:11.837 #undef SPDK_CONFIG_URING 00:11:11.837 #define SPDK_CONFIG_URING_PATH 00:11:11.837 #undef SPDK_CONFIG_URING_ZNS 00:11:11.837 #undef SPDK_CONFIG_USDT 00:11:11.837 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:11.837 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:11.837 #define SPDK_CONFIG_VFIO_USER 1 00:11:11.837 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:11.837 #define SPDK_CONFIG_VHOST 1 00:11:11.837 #define SPDK_CONFIG_VIRTIO 1 00:11:11.837 #undef SPDK_CONFIG_VTUNE 00:11:11.837 #define SPDK_CONFIG_VTUNE_DIR 00:11:11.837 #define SPDK_CONFIG_WERROR 1 00:11:11.837 #define SPDK_CONFIG_WPDK_DIR 00:11:11.837 #undef SPDK_CONFIG_XNVME 00:11:11.837 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:11.837 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:12.102 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:12.102 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:12.102 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:12.102 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:12.102 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:12.102 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:12.102 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:12.102 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:12.102 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:12.102 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:12.103 18:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:12.103 
18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:12.103 18:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:12.103 
18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:12.103 18:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:12.103 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:12.104 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1863675 ]] 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1863675 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.vNUPbx 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.vNUPbx/tests/target /tmp/spdk.vNUPbx 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=119111065600 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10245443584 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666886144 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847943168 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23359488 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:11:12.105 18:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64678031360 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=225280 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:12.105 * Looking for test storage... 
00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=119111065600 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=12460036096 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.105 18:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:12.105 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:12.106 18:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:12.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.106 --rc genhtml_branch_coverage=1 00:11:12.106 --rc genhtml_function_coverage=1 00:11:12.106 --rc genhtml_legend=1 00:11:12.106 --rc geninfo_all_blocks=1 00:11:12.106 --rc geninfo_unexecuted_blocks=1 00:11:12.106 00:11:12.106 ' 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:12.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.106 --rc genhtml_branch_coverage=1 00:11:12.106 --rc genhtml_function_coverage=1 00:11:12.106 --rc genhtml_legend=1 00:11:12.106 --rc geninfo_all_blocks=1 00:11:12.106 --rc geninfo_unexecuted_blocks=1 00:11:12.106 00:11:12.106 ' 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:12.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.106 --rc genhtml_branch_coverage=1 00:11:12.106 --rc genhtml_function_coverage=1 00:11:12.106 --rc genhtml_legend=1 00:11:12.106 --rc geninfo_all_blocks=1 00:11:12.106 --rc geninfo_unexecuted_blocks=1 00:11:12.106 00:11:12.106 ' 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:12.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.106 --rc genhtml_branch_coverage=1 00:11:12.106 --rc genhtml_function_coverage=1 00:11:12.106 --rc genhtml_legend=1 00:11:12.106 --rc geninfo_all_blocks=1 00:11:12.106 --rc geninfo_unexecuted_blocks=1 00:11:12.106 00:11:12.106 ' 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.106 18:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.106 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:12.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:12.107 18:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.251 18:10:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:20.251 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:20.252 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:20.252 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.252 18:10:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:20.252 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:20.252 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:20.252 18:10:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:20.252 18:10:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:20.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:20.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:11:20.252 00:11:20.252 --- 10.0.0.2 ping statistics --- 00:11:20.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.252 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:20.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:20.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:11:20.252 00:11:20.252 --- 10.0.0.1 ping statistics --- 00:11:20.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.252 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:20.252 18:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:20.252 ************************************ 00:11:20.252 START TEST nvmf_filesystem_no_in_capsule 00:11:20.252 ************************************ 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1867318 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1867318 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 1867318 ']' 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.252 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.253 18:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.253 [2024-11-19 18:10:21.233088] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:11:20.253 [2024-11-19 18:10:21.233149] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.253 [2024-11-19 18:10:21.332324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.253 [2024-11-19 18:10:21.386524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.253 [2024-11-19 18:10:21.386581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:20.253 [2024-11-19 18:10:21.386590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.253 [2024-11-19 18:10:21.386598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.253 [2024-11-19 18:10:21.386604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.253 [2024-11-19 18:10:21.388674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.253 [2024-11-19 18:10:21.388820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.253 [2024-11-19 18:10:21.388981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.253 [2024-11-19 18:10:21.388982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.825 [2024-11-19 18:10:22.113223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.825 Malloc1 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.825 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.826 [2024-11-19 18:10:22.268903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.826 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.826 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:20.826 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:20.826 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:20.826 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:20.826 18:10:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:20.826 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:20.826 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.826 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.088 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.088 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:21.088 { 00:11:21.088 "name": "Malloc1", 00:11:21.088 "aliases": [ 00:11:21.088 "af0f23e3-37b6-4d83-a578-aa045374e411" 00:11:21.088 ], 00:11:21.088 "product_name": "Malloc disk", 00:11:21.088 "block_size": 512, 00:11:21.088 "num_blocks": 1048576, 00:11:21.088 "uuid": "af0f23e3-37b6-4d83-a578-aa045374e411", 00:11:21.088 "assigned_rate_limits": { 00:11:21.088 "rw_ios_per_sec": 0, 00:11:21.088 "rw_mbytes_per_sec": 0, 00:11:21.088 "r_mbytes_per_sec": 0, 00:11:21.088 "w_mbytes_per_sec": 0 00:11:21.088 }, 00:11:21.088 "claimed": true, 00:11:21.088 "claim_type": "exclusive_write", 00:11:21.088 "zoned": false, 00:11:21.088 "supported_io_types": { 00:11:21.088 "read": true, 00:11:21.088 "write": true, 00:11:21.088 "unmap": true, 00:11:21.088 "flush": true, 00:11:21.088 "reset": true, 00:11:21.088 "nvme_admin": false, 00:11:21.088 "nvme_io": false, 00:11:21.088 "nvme_io_md": false, 00:11:21.088 "write_zeroes": true, 00:11:21.088 "zcopy": true, 00:11:21.088 "get_zone_info": false, 00:11:21.088 "zone_management": false, 00:11:21.088 "zone_append": false, 00:11:21.088 "compare": false, 00:11:21.088 "compare_and_write": 
false, 00:11:21.088 "abort": true, 00:11:21.088 "seek_hole": false, 00:11:21.088 "seek_data": false, 00:11:21.088 "copy": true, 00:11:21.088 "nvme_iov_md": false 00:11:21.088 }, 00:11:21.088 "memory_domains": [ 00:11:21.088 { 00:11:21.088 "dma_device_id": "system", 00:11:21.088 "dma_device_type": 1 00:11:21.088 }, 00:11:21.088 { 00:11:21.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.088 "dma_device_type": 2 00:11:21.088 } 00:11:21.088 ], 00:11:21.088 "driver_specific": {} 00:11:21.088 } 00:11:21.088 ]' 00:11:21.088 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:21.088 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:21.088 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:21.088 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:21.088 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:21.088 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:21.088 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:21.088 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:23.006 18:10:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:23.006 18:10:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:23.006 18:10:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:23.006 18:10:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:23.006 18:10:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:24.922 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:24.922 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:24.922 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:24.922 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:24.922 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:24.922 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:24.922 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:24.922 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:24.922 18:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:24.922 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:24.922 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:24.922 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:24.922 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:24.922 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:24.922 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:24.922 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:24.922 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:24.922 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:24.923 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:25.865 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:25.866 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:25.866 18:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:25.866 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.866 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.127 ************************************ 00:11:26.127 START TEST filesystem_ext4 00:11:26.127 ************************************ 00:11:26.127 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:26.127 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:26.127 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:26.127 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:26.127 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:26.127 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:26.127 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:26.127 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:26.127 18:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:26.127 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:26.127 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:26.127 mke2fs 1.47.0 (5-Feb-2023) 00:11:26.127 Discarding device blocks: 0/522240 done 00:11:26.127 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:26.127 Filesystem UUID: f430bfd5-c371-471c-aa5c-62312dd38efb 00:11:26.127 Superblock backups stored on blocks: 00:11:26.127 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:26.127 00:11:26.127 Allocating group tables: 0/64 done 00:11:26.127 Writing inode tables: 0/64 done 00:11:26.387 Creating journal (8192 blocks): done 00:11:28.712 Writing superblocks and filesystem accounting information: 0/64 done 00:11:28.712 00:11:28.712 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:28.712 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:34.002 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:34.002 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:34.002 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:34.002 18:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:34.002 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:34.002 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:34.002 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1867318 00:11:34.002 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:34.002 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:34.002 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:34.002 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:34.002 00:11:34.002 real 0m8.080s 00:11:34.002 user 0m0.029s 00:11:34.002 sys 0m0.082s 00:11:34.002 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.002 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:34.002 ************************************ 00:11:34.002 END TEST filesystem_ext4 00:11:34.002 ************************************ 00:11:34.264 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:34.264 
18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:34.264 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.264 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.264 ************************************ 00:11:34.264 START TEST filesystem_btrfs 00:11:34.264 ************************************ 00:11:34.264 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:34.264 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:34.264 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:34.264 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:34.264 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:34.264 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:34.264 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:34.264 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:34.264 18:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:34.264 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:34.264 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:34.264 btrfs-progs v6.8.1 00:11:34.264 See https://btrfs.readthedocs.io for more information. 00:11:34.264 00:11:34.264 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:34.264 NOTE: several default settings have changed in version 5.15, please make sure 00:11:34.264 this does not affect your deployments: 00:11:34.264 - DUP for metadata (-m dup) 00:11:34.264 - enabled no-holes (-O no-holes) 00:11:34.264 - enabled free-space-tree (-R free-space-tree) 00:11:34.264 00:11:34.264 Label: (null) 00:11:34.264 UUID: 419bf39b-a9c4-4931-998e-a9186eaffef1 00:11:34.264 Node size: 16384 00:11:34.264 Sector size: 4096 (CPU page size: 4096) 00:11:34.264 Filesystem size: 510.00MiB 00:11:34.264 Block group profiles: 00:11:34.264 Data: single 8.00MiB 00:11:34.264 Metadata: DUP 32.00MiB 00:11:34.264 System: DUP 8.00MiB 00:11:34.264 SSD detected: yes 00:11:34.264 Zoned device: no 00:11:34.264 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:34.264 Checksum: crc32c 00:11:34.264 Number of devices: 1 00:11:34.264 Devices: 00:11:34.264 ID SIZE PATH 00:11:34.264 1 510.00MiB /dev/nvme0n1p1 00:11:34.264 00:11:34.264 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:34.264 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:35.650 18:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1867318 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:35.650 00:11:35.650 real 0m1.250s 00:11:35.650 user 0m0.031s 00:11:35.650 sys 0m0.116s 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.650 
18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:35.650 ************************************ 00:11:35.650 END TEST filesystem_btrfs 00:11:35.650 ************************************ 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.650 ************************************ 00:11:35.650 START TEST filesystem_xfs 00:11:35.650 ************************************ 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:35.650 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:35.650 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:35.650 = sectsz=512 attr=2, projid32bit=1 00:11:35.650 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:35.650 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:35.650 data = bsize=4096 blocks=130560, imaxpct=25 00:11:35.650 = sunit=0 swidth=0 blks 00:11:35.650 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:35.650 log =internal log bsize=4096 blocks=16384, version=2 00:11:35.650 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:35.650 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:36.594 Discarding blocks...Done. 
00:11:36.594 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:36.594 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:38.506 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:38.506 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:38.506 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:38.506 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:38.506 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:38.506 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:38.506 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1867318 00:11:38.506 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:38.506 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:38.506 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:38.506 18:10:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:38.506 00:11:38.506 real 0m2.974s 00:11:38.506 user 0m0.026s 00:11:38.506 sys 0m0.079s 00:11:38.506 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.506 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:38.506 ************************************ 00:11:38.506 END TEST filesystem_xfs 00:11:38.506 ************************************ 00:11:38.506 18:10:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:38.767 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:38.767 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.029 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:39.029 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:39.029 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:39.029 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.029 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:39.029 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.029 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:39.029 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.029 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.029 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.029 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.029 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:39.029 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1867318 00:11:39.029 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1867318 ']' 00:11:39.029 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1867318 00:11:39.030 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:39.030 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.030 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1867318 00:11:39.030 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:39.030 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:39.030 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1867318' 00:11:39.030 killing process with pid 1867318 00:11:39.030 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1867318 00:11:39.030 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1867318 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:39.292 00:11:39.292 real 0m19.392s 00:11:39.292 user 1m16.564s 00:11:39.292 sys 0m1.497s 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.292 ************************************ 00:11:39.292 END TEST nvmf_filesystem_no_in_capsule 00:11:39.292 ************************************ 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.292 18:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:39.292 ************************************ 00:11:39.292 START TEST nvmf_filesystem_in_capsule 00:11:39.292 ************************************ 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1871377 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1871377 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1871377 ']' 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.292 18:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.292 18:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.292 [2024-11-19 18:10:40.707519] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:11:39.292 [2024-11-19 18:10:40.707569] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.554 [2024-11-19 18:10:40.796888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.554 [2024-11-19 18:10:40.830511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.554 [2024-11-19 18:10:40.830540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.554 [2024-11-19 18:10:40.830546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.554 [2024-11-19 18:10:40.830551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.554 [2024-11-19 18:10:40.830555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:39.554 [2024-11-19 18:10:40.831898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.554 [2024-11-19 18:10:40.832050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.554 [2024-11-19 18:10:40.832418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.554 [2024-11-19 18:10:40.832490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.126 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.126 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:40.126 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:40.126 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:40.126 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.126 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.126 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:40.126 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:40.127 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.127 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.127 [2024-11-19 18:10:41.551273] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.127 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.127 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:40.127 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.127 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.389 Malloc1 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.389 18:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.389 [2024-11-19 18:10:41.672470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.389 18:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:40.389 { 00:11:40.389 "name": "Malloc1", 00:11:40.389 "aliases": [ 00:11:40.389 "19b4b064-4916-4764-b3dc-a15defaeea0f" 00:11:40.389 ], 00:11:40.389 "product_name": "Malloc disk", 00:11:40.389 "block_size": 512, 00:11:40.389 "num_blocks": 1048576, 00:11:40.389 "uuid": "19b4b064-4916-4764-b3dc-a15defaeea0f", 00:11:40.389 "assigned_rate_limits": { 00:11:40.389 "rw_ios_per_sec": 0, 00:11:40.389 "rw_mbytes_per_sec": 0, 00:11:40.389 "r_mbytes_per_sec": 0, 00:11:40.389 "w_mbytes_per_sec": 0 00:11:40.389 }, 00:11:40.389 "claimed": true, 00:11:40.389 "claim_type": "exclusive_write", 00:11:40.389 "zoned": false, 00:11:40.389 "supported_io_types": { 00:11:40.389 "read": true, 00:11:40.389 "write": true, 00:11:40.389 "unmap": true, 00:11:40.389 "flush": true, 00:11:40.389 "reset": true, 00:11:40.389 "nvme_admin": false, 00:11:40.389 "nvme_io": false, 00:11:40.389 "nvme_io_md": false, 00:11:40.389 "write_zeroes": true, 00:11:40.389 "zcopy": true, 00:11:40.389 "get_zone_info": false, 00:11:40.389 "zone_management": false, 00:11:40.389 "zone_append": false, 00:11:40.389 "compare": false, 00:11:40.389 "compare_and_write": false, 00:11:40.389 "abort": true, 00:11:40.389 "seek_hole": false, 00:11:40.389 "seek_data": false, 00:11:40.389 "copy": true, 00:11:40.389 "nvme_iov_md": false 00:11:40.389 }, 00:11:40.389 "memory_domains": [ 00:11:40.389 { 00:11:40.389 "dma_device_id": "system", 00:11:40.389 "dma_device_type": 1 00:11:40.389 }, 00:11:40.389 { 00:11:40.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.389 "dma_device_type": 2 00:11:40.389 } 00:11:40.389 ], 00:11:40.389 
"driver_specific": {} 00:11:40.389 } 00:11:40.389 ]' 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:40.389 18:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.306 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:42.306 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:42.306 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.306 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:42.306 18:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:44.224 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:44.224 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:44.224 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:44.224 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:44.224 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:44.224 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:44.224 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:44.224 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:44.224 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:44.224 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:44.224 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:44.224 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:44.224 18:10:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:44.224 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:44.224 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:44.224 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:44.224 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:44.224 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:44.486 18:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:45.872 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:45.872 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:45.872 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:45.872 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.872 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.872 ************************************ 00:11:45.872 START TEST filesystem_in_capsule_ext4 00:11:45.872 ************************************ 00:11:45.872 18:10:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:45.872 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:45.872 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:45.872 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:45.872 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:45.872 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:45.872 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:45.872 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:45.872 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:45.872 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:45.872 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:45.872 mke2fs 1.47.0 (5-Feb-2023) 00:11:45.872 Discarding device blocks: 
0/522240 done 00:11:45.872 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:45.872 Filesystem UUID: 5fc422d4-5c26-4f22-b1be-be110da429be 00:11:45.872 Superblock backups stored on blocks: 00:11:45.872 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:45.872 00:11:45.872 Allocating group tables: 0/64 done 00:11:45.872 Writing inode tables: 0/64 done 00:11:45.872 Creating journal (8192 blocks): done 00:11:48.105 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:11:48.105 00:11:48.105 18:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:48.105 18:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1871377 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:54.693 00:11:54.693 real 0m8.581s 00:11:54.693 user 0m0.028s 00:11:54.693 sys 0m0.079s 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:54.693 ************************************ 00:11:54.693 END TEST filesystem_in_capsule_ext4 00:11:54.693 ************************************ 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.693 ************************************ 00:11:54.693 START 
TEST filesystem_in_capsule_btrfs 00:11:54.693 ************************************ 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:54.693 btrfs-progs v6.8.1 00:11:54.693 See https://btrfs.readthedocs.io for more information. 00:11:54.693 00:11:54.693 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:54.693 NOTE: several default settings have changed in version 5.15, please make sure 00:11:54.693 this does not affect your deployments: 00:11:54.693 - DUP for metadata (-m dup) 00:11:54.693 - enabled no-holes (-O no-holes) 00:11:54.693 - enabled free-space-tree (-R free-space-tree) 00:11:54.693 00:11:54.693 Label: (null) 00:11:54.693 UUID: 498dd7a2-34bf-4b39-815d-3453593be4af 00:11:54.693 Node size: 16384 00:11:54.693 Sector size: 4096 (CPU page size: 4096) 00:11:54.693 Filesystem size: 510.00MiB 00:11:54.693 Block group profiles: 00:11:54.693 Data: single 8.00MiB 00:11:54.693 Metadata: DUP 32.00MiB 00:11:54.693 System: DUP 8.00MiB 00:11:54.693 SSD detected: yes 00:11:54.693 Zoned device: no 00:11:54.693 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:54.693 Checksum: crc32c 00:11:54.693 Number of devices: 1 00:11:54.693 Devices: 00:11:54.693 ID SIZE PATH 00:11:54.693 1 510.00MiB /dev/nvme0n1p1 00:11:54.693 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:54.693 18:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:54.954 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:54.955 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:54.955 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:54.955 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:54.955 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:54.955 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:54.955 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1871377 00:11:54.955 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:54.955 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:54.955 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:54.955 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:54.955 00:11:54.955 real 0m0.687s 00:11:54.955 user 0m0.029s 00:11:54.955 sys 0m0.121s 00:11:54.955 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.955 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:54.955 ************************************ 00:11:54.955 END TEST filesystem_in_capsule_btrfs 00:11:54.955 ************************************ 00:11:54.955 18:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:54.955 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:54.955 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.955 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.955 ************************************ 00:11:54.955 START TEST filesystem_in_capsule_xfs 00:11:54.955 ************************************ 00:11:55.216 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:55.216 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:55.216 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:55.216 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:55.216 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:55.216 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:55.216 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:55.216 
18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:55.216 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:55.216 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:55.216 18:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:55.216 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:55.216 = sectsz=512 attr=2, projid32bit=1 00:11:55.216 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:55.216 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:55.216 data = bsize=4096 blocks=130560, imaxpct=25 00:11:55.216 = sunit=0 swidth=0 blks 00:11:55.216 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:55.216 log =internal log bsize=4096 blocks=16384, version=2 00:11:55.216 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:55.216 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:56.160 Discarding blocks...Done. 
00:11:56.160 18:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:56.160 18:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:58.073 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:58.073 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:58.073 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:58.073 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:58.073 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:58.073 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:58.073 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1871377 00:11:58.073 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:58.073 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:58.073 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:58.074 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:58.074 00:11:58.074 real 0m2.980s 00:11:58.074 user 0m0.025s 00:11:58.074 sys 0m0.080s 00:11:58.074 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.074 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:58.074 ************************************ 00:11:58.074 END TEST filesystem_in_capsule_xfs 00:11:58.074 ************************************ 00:11:58.074 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.335 18:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1871377 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1871377 ']' 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1871377 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.335 18:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1871377 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1871377' 00:11:58.335 killing process with pid 1871377 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1871377 00:11:58.335 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1871377 00:11:58.597 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:58.597 00:11:58.597 real 0m19.319s 00:11:58.597 user 1m16.430s 00:11:58.597 sys 0m1.425s 00:11:58.597 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.597 18:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.597 ************************************ 00:11:58.597 END TEST nvmf_filesystem_in_capsule 00:11:58.597 ************************************ 00:11:58.597 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:58.597 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:58.597 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:58.597 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:58.597 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:58.597 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:58.597 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:58.597 rmmod nvme_tcp 00:11:58.597 rmmod nvme_fabrics 00:11:58.597 rmmod nvme_keyring 00:11:58.858 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:58.858 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:58.858 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:58.858 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:58.858 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:58.859 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:58.859 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:58.859 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:58.859 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:58.859 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:58.859 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:58.859 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:58.859 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:58.859 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.859 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.859 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.777 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:00.777 00:12:00.777 real 0m49.126s 00:12:00.777 user 2m35.394s 00:12:00.777 sys 0m8.898s 00:12:00.777 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.777 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:00.777 ************************************ 00:12:00.777 END TEST nvmf_filesystem 00:12:00.777 ************************************ 00:12:00.777 18:11:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:00.777 18:11:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:00.777 18:11:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.777 18:11:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:01.039 ************************************ 00:12:01.039 START TEST nvmf_target_discovery 00:12:01.039 ************************************ 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:01.039 * Looking for test storage... 
00:12:01.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:01.039 
18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:01.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.039 --rc genhtml_branch_coverage=1 00:12:01.039 --rc genhtml_function_coverage=1 00:12:01.039 --rc genhtml_legend=1 00:12:01.039 --rc geninfo_all_blocks=1 00:12:01.039 --rc geninfo_unexecuted_blocks=1 00:12:01.039 00:12:01.039 ' 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:01.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.039 --rc genhtml_branch_coverage=1 00:12:01.039 --rc genhtml_function_coverage=1 00:12:01.039 --rc genhtml_legend=1 00:12:01.039 --rc geninfo_all_blocks=1 00:12:01.039 --rc geninfo_unexecuted_blocks=1 00:12:01.039 00:12:01.039 ' 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:01.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.039 --rc genhtml_branch_coverage=1 00:12:01.039 --rc genhtml_function_coverage=1 00:12:01.039 --rc genhtml_legend=1 00:12:01.039 --rc geninfo_all_blocks=1 00:12:01.039 --rc geninfo_unexecuted_blocks=1 00:12:01.039 00:12:01.039 ' 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:01.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.039 --rc genhtml_branch_coverage=1 00:12:01.039 --rc genhtml_function_coverage=1 00:12:01.039 --rc genhtml_legend=1 00:12:01.039 --rc geninfo_all_blocks=1 00:12:01.039 --rc geninfo_unexecuted_blocks=1 00:12:01.039 00:12:01.039 ' 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.039 18:11:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.039 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:01.040 18:11:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.191 18:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.191 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:09.191 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:09.191 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:09.191 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:09.191 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:09.191 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.192 18:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:09.192 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:09.192 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.192 18:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:09.192 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.192 18:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:09.192 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:09.192 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:09.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:12:09.193 00:12:09.193 --- 10.0.0.2 ping statistics --- 00:12:09.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.193 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:09.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:12:09.193 00:12:09.193 --- 10.0.0.1 ping statistics --- 00:12:09.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.193 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1879501 00:12:09.193 18:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1879501 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1879501 ']' 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.193 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.193 [2024-11-19 18:11:10.046170] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:12:09.193 [2024-11-19 18:11:10.046244] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.193 [2024-11-19 18:11:10.120353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.193 [2024-11-19 18:11:10.172370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:09.193 [2024-11-19 18:11:10.172421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.193 [2024-11-19 18:11:10.172428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.193 [2024-11-19 18:11:10.172434] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.193 [2024-11-19 18:11:10.172439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.193 [2024-11-19 18:11:10.174210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.193 [2024-11-19 18:11:10.174429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.193 [2024-11-19 18:11:10.174590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.193 [2024-11-19 18:11:10.174591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.193 [2024-11-19 18:11:10.336690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.193 Null1 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.193 
18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.193 [2024-11-19 18:11:10.397172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.193 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.194 Null2 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.194 
18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.194 Null3 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.194 Null4 00:12:09.194 
18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.194 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:09.457 00:12:09.457 Discovery Log Number of Records 6, Generation counter 6 00:12:09.457 =====Discovery Log Entry 0====== 00:12:09.457 trtype: tcp 00:12:09.457 adrfam: ipv4 00:12:09.457 subtype: current discovery subsystem 00:12:09.457 treq: not required 00:12:09.457 portid: 0 00:12:09.457 trsvcid: 4420 00:12:09.457 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:09.457 traddr: 10.0.0.2 00:12:09.457 eflags: explicit discovery connections, duplicate discovery information 00:12:09.457 sectype: none 00:12:09.457 =====Discovery Log Entry 1====== 00:12:09.457 trtype: tcp 00:12:09.457 adrfam: ipv4 00:12:09.457 subtype: nvme subsystem 00:12:09.457 treq: not required 00:12:09.457 portid: 0 00:12:09.457 trsvcid: 4420 00:12:09.457 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:09.457 traddr: 10.0.0.2 00:12:09.457 eflags: none 00:12:09.457 sectype: none 00:12:09.457 =====Discovery Log Entry 2====== 00:12:09.457 
trtype: tcp 00:12:09.457 adrfam: ipv4 00:12:09.457 subtype: nvme subsystem 00:12:09.457 treq: not required 00:12:09.457 portid: 0 00:12:09.457 trsvcid: 4420 00:12:09.457 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:09.457 traddr: 10.0.0.2 00:12:09.457 eflags: none 00:12:09.457 sectype: none 00:12:09.457 =====Discovery Log Entry 3====== 00:12:09.457 trtype: tcp 00:12:09.457 adrfam: ipv4 00:12:09.457 subtype: nvme subsystem 00:12:09.457 treq: not required 00:12:09.457 portid: 0 00:12:09.457 trsvcid: 4420 00:12:09.457 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:09.457 traddr: 10.0.0.2 00:12:09.457 eflags: none 00:12:09.457 sectype: none 00:12:09.457 =====Discovery Log Entry 4====== 00:12:09.457 trtype: tcp 00:12:09.457 adrfam: ipv4 00:12:09.457 subtype: nvme subsystem 00:12:09.457 treq: not required 00:12:09.457 portid: 0 00:12:09.457 trsvcid: 4420 00:12:09.457 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:09.457 traddr: 10.0.0.2 00:12:09.457 eflags: none 00:12:09.457 sectype: none 00:12:09.457 =====Discovery Log Entry 5====== 00:12:09.457 trtype: tcp 00:12:09.457 adrfam: ipv4 00:12:09.457 subtype: discovery subsystem referral 00:12:09.457 treq: not required 00:12:09.457 portid: 0 00:12:09.457 trsvcid: 4430 00:12:09.457 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:09.457 traddr: 10.0.0.2 00:12:09.457 eflags: none 00:12:09.457 sectype: none 00:12:09.457 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:09.457 Perform nvmf subsystem discovery via RPC 00:12:09.457 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:09.457 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.457 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.457 [ 00:12:09.457 { 00:12:09.457 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:09.457 "subtype": "Discovery", 00:12:09.457 "listen_addresses": [ 00:12:09.457 { 00:12:09.457 "trtype": "TCP", 00:12:09.457 "adrfam": "IPv4", 00:12:09.457 "traddr": "10.0.0.2", 00:12:09.457 "trsvcid": "4420" 00:12:09.457 } 00:12:09.457 ], 00:12:09.457 "allow_any_host": true, 00:12:09.457 "hosts": [] 00:12:09.457 }, 00:12:09.457 { 00:12:09.457 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.457 "subtype": "NVMe", 00:12:09.457 "listen_addresses": [ 00:12:09.457 { 00:12:09.457 "trtype": "TCP", 00:12:09.457 "adrfam": "IPv4", 00:12:09.457 "traddr": "10.0.0.2", 00:12:09.457 "trsvcid": "4420" 00:12:09.457 } 00:12:09.457 ], 00:12:09.457 "allow_any_host": true, 00:12:09.457 "hosts": [], 00:12:09.457 "serial_number": "SPDK00000000000001", 00:12:09.457 "model_number": "SPDK bdev Controller", 00:12:09.457 "max_namespaces": 32, 00:12:09.457 "min_cntlid": 1, 00:12:09.457 "max_cntlid": 65519, 00:12:09.457 "namespaces": [ 00:12:09.457 { 00:12:09.457 "nsid": 1, 00:12:09.457 "bdev_name": "Null1", 00:12:09.457 "name": "Null1", 00:12:09.457 "nguid": "41D04F73187442D482CF37D7A5DD311A", 00:12:09.457 "uuid": "41d04f73-1874-42d4-82cf-37d7a5dd311a" 00:12:09.457 } 00:12:09.457 ] 00:12:09.457 }, 00:12:09.457 { 00:12:09.457 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:09.457 "subtype": "NVMe", 00:12:09.457 "listen_addresses": [ 00:12:09.457 { 00:12:09.457 "trtype": "TCP", 00:12:09.457 "adrfam": "IPv4", 00:12:09.457 "traddr": "10.0.0.2", 00:12:09.457 "trsvcid": "4420" 00:12:09.457 } 00:12:09.457 ], 00:12:09.458 "allow_any_host": true, 00:12:09.458 "hosts": [], 00:12:09.458 "serial_number": "SPDK00000000000002", 00:12:09.458 "model_number": "SPDK bdev Controller", 00:12:09.458 "max_namespaces": 32, 00:12:09.458 "min_cntlid": 1, 00:12:09.458 "max_cntlid": 65519, 00:12:09.458 "namespaces": [ 00:12:09.458 { 00:12:09.458 "nsid": 1, 00:12:09.458 "bdev_name": "Null2", 00:12:09.458 "name": "Null2", 00:12:09.458 "nguid": "67647A4AD0834E2FA951A9FBF7DA4C99", 
00:12:09.458 "uuid": "67647a4a-d083-4e2f-a951-a9fbf7da4c99" 00:12:09.458 } 00:12:09.458 ] 00:12:09.458 }, 00:12:09.458 { 00:12:09.458 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:09.458 "subtype": "NVMe", 00:12:09.458 "listen_addresses": [ 00:12:09.458 { 00:12:09.458 "trtype": "TCP", 00:12:09.458 "adrfam": "IPv4", 00:12:09.458 "traddr": "10.0.0.2", 00:12:09.458 "trsvcid": "4420" 00:12:09.458 } 00:12:09.458 ], 00:12:09.458 "allow_any_host": true, 00:12:09.458 "hosts": [], 00:12:09.458 "serial_number": "SPDK00000000000003", 00:12:09.458 "model_number": "SPDK bdev Controller", 00:12:09.458 "max_namespaces": 32, 00:12:09.458 "min_cntlid": 1, 00:12:09.458 "max_cntlid": 65519, 00:12:09.458 "namespaces": [ 00:12:09.458 { 00:12:09.458 "nsid": 1, 00:12:09.458 "bdev_name": "Null3", 00:12:09.458 "name": "Null3", 00:12:09.458 "nguid": "B51FE68F576D4006A2F272DCEE00C499", 00:12:09.458 "uuid": "b51fe68f-576d-4006-a2f2-72dcee00c499" 00:12:09.458 } 00:12:09.458 ] 00:12:09.458 }, 00:12:09.458 { 00:12:09.458 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:09.458 "subtype": "NVMe", 00:12:09.458 "listen_addresses": [ 00:12:09.458 { 00:12:09.458 "trtype": "TCP", 00:12:09.458 "adrfam": "IPv4", 00:12:09.458 "traddr": "10.0.0.2", 00:12:09.458 "trsvcid": "4420" 00:12:09.458 } 00:12:09.458 ], 00:12:09.458 "allow_any_host": true, 00:12:09.458 "hosts": [], 00:12:09.458 "serial_number": "SPDK00000000000004", 00:12:09.458 "model_number": "SPDK bdev Controller", 00:12:09.458 "max_namespaces": 32, 00:12:09.458 "min_cntlid": 1, 00:12:09.458 "max_cntlid": 65519, 00:12:09.458 "namespaces": [ 00:12:09.458 { 00:12:09.458 "nsid": 1, 00:12:09.458 "bdev_name": "Null4", 00:12:09.458 "name": "Null4", 00:12:09.458 "nguid": "3FD8B712EF83477BA1F1D408B4B8D55C", 00:12:09.458 "uuid": "3fd8b712-ef83-477b-a1f1-d408b4b8d55c" 00:12:09.458 } 00:12:09.458 ] 00:12:09.458 } 00:12:09.458 ] 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.458 
18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.458 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:09.720 18:11:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:09.720 rmmod nvme_tcp 00:12:09.720 rmmod nvme_fabrics 00:12:09.720 rmmod nvme_keyring 00:12:09.720 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:09.720 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:09.720 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:09.720 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1879501 ']' 00:12:09.720 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1879501 00:12:09.720 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1879501 ']' 00:12:09.720 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1879501 00:12:09.720 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:12:09.720 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.720 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1879501 00:12:09.720 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.720 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.720 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1879501' 00:12:09.720 killing process with pid 1879501 00:12:09.720 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1879501 00:12:09.720 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1879501 00:12:09.982 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:09.982 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:09.982 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:09.982 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:09.982 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:09.982 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:09.982 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:09.982 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:09.982 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:09.982 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.982 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.982 18:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.898 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:11.898 00:12:11.898 real 0m11.112s 00:12:11.898 user 0m6.678s 00:12:11.898 sys 0m6.076s 00:12:11.898 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.898 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:11.898 ************************************ 00:12:11.898 END TEST nvmf_target_discovery 00:12:11.898 ************************************ 00:12:12.160 18:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:12.160 18:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:12.160 18:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.160 18:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.160 ************************************ 00:12:12.160 START TEST nvmf_referrals 00:12:12.160 ************************************ 00:12:12.160 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:12.160 * Looking for test storage... 
00:12:12.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.160 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:12.160 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:12.160 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:12.423 18:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:12.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.423 
--rc genhtml_branch_coverage=1 00:12:12.423 --rc genhtml_function_coverage=1 00:12:12.423 --rc genhtml_legend=1 00:12:12.423 --rc geninfo_all_blocks=1 00:12:12.423 --rc geninfo_unexecuted_blocks=1 00:12:12.423 00:12:12.423 ' 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:12.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.423 --rc genhtml_branch_coverage=1 00:12:12.423 --rc genhtml_function_coverage=1 00:12:12.423 --rc genhtml_legend=1 00:12:12.423 --rc geninfo_all_blocks=1 00:12:12.423 --rc geninfo_unexecuted_blocks=1 00:12:12.423 00:12:12.423 ' 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:12.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.423 --rc genhtml_branch_coverage=1 00:12:12.423 --rc genhtml_function_coverage=1 00:12:12.423 --rc genhtml_legend=1 00:12:12.423 --rc geninfo_all_blocks=1 00:12:12.423 --rc geninfo_unexecuted_blocks=1 00:12:12.423 00:12:12.423 ' 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:12.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.423 --rc genhtml_branch_coverage=1 00:12:12.423 --rc genhtml_function_coverage=1 00:12:12.423 --rc genhtml_legend=1 00:12:12.423 --rc geninfo_all_blocks=1 00:12:12.423 --rc geninfo_unexecuted_blocks=1 00:12:12.423 00:12:12.423 ' 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.423 
18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.423 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.424 18:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.424 18:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:12.424 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:20.576 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:20.576 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:20.576 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:20.576 18:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:20.576 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:20.576 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:20.577 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.577 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.577 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:20.577 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:20.577 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.577 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:20.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:12:20.577 00:12:20.577 --- 10.0.0.2 ping statistics --- 00:12:20.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.577 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:20.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:12:20.577 00:12:20.577 --- 10.0.0.1 ping statistics --- 00:12:20.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.577 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1883975 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1883975 00:12:20.577 
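The namespace setup and connectivity checks traced above can be condensed into one sketch. The interface names (`cvl_0_0`/`cvl_0_1`), addresses, and port come straight from the log; the function name is illustrative, and running it for real requires root and the actual NICs.

```shell
# Hypothetical condensation of nvmf_tcp_init as traced above.
setup_tcp_netns() {
    ip netns add cvl_0_0_ns_spdk                 # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator IP stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic (port 4420) in from the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check reachability in both directions, as the log does
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
}
```

Defining the function is harmless; only invoking it touches the system.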
18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1883975 ']' 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.577 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.577 [2024-11-19 18:11:21.310524] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:12:20.577 [2024-11-19 18:11:21.310595] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.577 [2024-11-19 18:11:21.410519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.577 [2024-11-19 18:11:21.464706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.577 [2024-11-19 18:11:21.464757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:20.577 [2024-11-19 18:11:21.464766] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.577 [2024-11-19 18:11:21.464774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.577 [2024-11-19 18:11:21.464780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:20.577 [2024-11-19 18:11:21.466824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.577 [2024-11-19 18:11:21.466983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.577 [2024-11-19 18:11:21.467151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.577 [2024-11-19 18:11:21.467152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.839 [2024-11-19 18:11:22.183367] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.839 [2024-11-19 18:11:22.199667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:20.839 18:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.839 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
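The referral registration exercised by referrals.sh above boils down to a handful of RPC calls against the running nvmf_tgt. The `rpc.py` path and the function name below are assumptions for illustration; the RPC names and arguments are taken verbatim from the log.

```shell
# Sketch of the RPC sequence traced above (assumed rpc.py location).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
add_and_count_referrals() {
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    "$RPC" nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    "$RPC" nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    "$RPC" nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
    # the test then asserts that exactly three entries come back
    "$RPC" nvmf_discovery_get_referrals | jq length
}
```

The log's `(( 3 == 3 ))` check is exactly this count comparison.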
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.101 18:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.101 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.363 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.363 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:21.363 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:21.363 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:21.363 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:21.363 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:21.363 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:21.363 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:21.626 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:21.889 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:21.889 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:21.889 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:21.889 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:21.889 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:21.889 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:21.889 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:21.889 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:21.889 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:21.889 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:21.889 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:21.889 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:21.889 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
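The `get_referral_ips nvme` and `get_discovery_entries` helpers above both post-process `nvme discover ... -o json` with jq. A stand-alone illustration of those two filters, run against a fabricated two-record discovery page (the sample JSON is invented, reduced to the fields the filters touch):

```shell
# Fabricated sample of `nvme discover -o json` output.
json='{"records":[
  {"subtype":"current discovery subsystem","traddr":"10.0.0.2",
   "subnqn":"nqn.2014-08.org.nvmexpress.discovery"},
  {"subtype":"discovery subsystem referral","traddr":"127.0.0.2",
   "subnqn":"nqn.2014-08.org.nvmexpress.discovery"}]}'

# Referral addresses: every record except the discovery page we queried.
printf '%s' "$json" |
  jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
# -> 127.0.0.2

# Records of one subtype, as get_discovery_entries extracts them:
printf '%s' "$json" |
  jq '.records[] | select(.subtype == "discovery subsystem referral")' |
  jq -r .subnqn
# -> nqn.2014-08.org.nvmexpress.discovery
```

Sorting the first filter's output (`| sort`) reproduces the ordered IP lists the log compares against.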
subsystem referral")' 00:12:22.150 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:22.150 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:22.150 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.150 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.150 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.150 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:22.150 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:22.150 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:22.150 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:22.150 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.151 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:22.151 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.151 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.151 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:22.151 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:22.151 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:22.151 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:22.151 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:22.151 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:22.151 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:22.151 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:22.411 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:22.411 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:22.411 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:22.411 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:22.411 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:22.411 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:22.411 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:22.411 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:22.411 18:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:22.411 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:22.411 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:22.411 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:22.411 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:22.671 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:22.671 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:22.671 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.671 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.671 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.671 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:22.671 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:22.671 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.671 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.671 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.671 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:22.671 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:22.671 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:22.671 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:22.671 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:22.671 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:22.671 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:22.932 rmmod nvme_tcp 00:12:22.932 rmmod nvme_fabrics 00:12:22.932 rmmod nvme_keyring 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1883975 ']' 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1883975 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1883975 ']' 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1883975 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.932 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1883975 00:12:23.194 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.194 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.194 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1883975' 00:12:23.194 killing process with pid 1883975 00:12:23.194 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 1883975 00:12:23.194 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1883975 00:12:23.194 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:23.194 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:23.194 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:23.194 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:23.194 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:23.194 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:23.194 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:23.194 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:23.194 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:23.194 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.194 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.194 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.745 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:25.745 00:12:25.745 real 0m13.157s 00:12:25.745 user 0m15.459s 00:12:25.745 sys 0m6.512s 00:12:25.745 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.745 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.745 
************************************ 00:12:25.745 END TEST nvmf_referrals 00:12:25.745 ************************************ 00:12:25.745 18:11:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:25.745 18:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:25.746 ************************************ 00:12:25.746 START TEST nvmf_connect_disconnect 00:12:25.746 ************************************ 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:25.746 * Looking for test storage... 
00:12:25.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:25.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.746 --rc genhtml_branch_coverage=1 00:12:25.746 --rc genhtml_function_coverage=1 00:12:25.746 --rc genhtml_legend=1 00:12:25.746 --rc geninfo_all_blocks=1 00:12:25.746 --rc geninfo_unexecuted_blocks=1 00:12:25.746 00:12:25.746 ' 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:25.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.746 --rc genhtml_branch_coverage=1 00:12:25.746 --rc genhtml_function_coverage=1 00:12:25.746 --rc genhtml_legend=1 00:12:25.746 --rc geninfo_all_blocks=1 00:12:25.746 --rc geninfo_unexecuted_blocks=1 00:12:25.746 00:12:25.746 ' 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:25.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.746 --rc genhtml_branch_coverage=1 00:12:25.746 --rc genhtml_function_coverage=1 00:12:25.746 --rc genhtml_legend=1 00:12:25.746 --rc geninfo_all_blocks=1 00:12:25.746 --rc geninfo_unexecuted_blocks=1 00:12:25.746 00:12:25.746 ' 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:25.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.746 --rc genhtml_branch_coverage=1 00:12:25.746 --rc genhtml_function_coverage=1 00:12:25.746 --rc genhtml_legend=1 00:12:25.746 --rc geninfo_all_blocks=1 00:12:25.746 --rc geninfo_unexecuted_blocks=1 00:12:25.746 00:12:25.746 ' 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.746 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:25.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:25.747 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.898 18:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.898 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:33.899 18:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:33.899 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:33.899 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.899 18:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:33.899 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:33.899 18:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:33.899 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.899 18:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:33.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:12:33.899 00:12:33.899 --- 10.0.0.2 ping statistics --- 00:12:33.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.899 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:33.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:12:33.899 00:12:33.899 --- 10.0.0.1 ping statistics --- 00:12:33.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.899 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=1888965 00:12:33.899 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1888965 00:12:33.900 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:33.900 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1888965 ']' 00:12:33.900 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.900 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.900 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.900 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.900 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.900 [2024-11-19 18:11:34.486410] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:12:33.900 [2024-11-19 18:11:34.486479] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.900 [2024-11-19 18:11:34.586833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.900 [2024-11-19 18:11:34.640008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:33.900 [2024-11-19 18:11:34.640061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.900 [2024-11-19 18:11:34.640070] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.900 [2024-11-19 18:11:34.640077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.900 [2024-11-19 18:11:34.640084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.900 [2024-11-19 18:11:34.642143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.900 [2024-11-19 18:11:34.642303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.900 [2024-11-19 18:11:34.642357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.900 [2024-11-19 18:11:34.642357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.900 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.900 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:33.900 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:33.900 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:33.900 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.900 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.900 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:33.900 18:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.900 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.900 [2024-11-19 18:11:35.361985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.163 18:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.163 [2024-11-19 18:11:35.443821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:34.163 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:38.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:52.829 18:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:52.829 rmmod nvme_tcp 00:12:52.829 rmmod nvme_fabrics 00:12:52.829 rmmod nvme_keyring 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1888965 ']' 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1888965 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1888965 ']' 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1888965 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1888965 
00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1888965' 00:12:52.829 killing process with pid 1888965 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1888965 00:12:52.829 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1888965 00:12:52.829 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:52.829 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:52.829 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:52.829 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:52.829 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:52.829 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:52.829 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:52.829 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:52.829 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:52.829 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.829 18:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.829 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.741 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:54.741 00:12:54.741 real 0m29.458s 00:12:54.741 user 1m19.420s 00:12:54.741 sys 0m7.255s 00:12:54.741 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.741 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:54.741 ************************************ 00:12:54.741 END TEST nvmf_connect_disconnect 00:12:54.741 ************************************ 00:12:54.741 18:11:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:54.741 18:11:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:54.741 18:11:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.741 18:11:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.002 ************************************ 00:12:55.002 START TEST nvmf_multitarget 00:12:55.002 ************************************ 00:12:55.002 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:55.002 * Looking for test storage... 
00:12:55.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:55.003 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.003 --rc genhtml_branch_coverage=1 00:12:55.003 --rc genhtml_function_coverage=1 00:12:55.003 --rc genhtml_legend=1 00:12:55.003 --rc geninfo_all_blocks=1 00:12:55.003 --rc geninfo_unexecuted_blocks=1 00:12:55.003 00:12:55.003 ' 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:55.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.003 --rc genhtml_branch_coverage=1 00:12:55.003 --rc genhtml_function_coverage=1 00:12:55.003 --rc genhtml_legend=1 00:12:55.003 --rc geninfo_all_blocks=1 00:12:55.003 --rc geninfo_unexecuted_blocks=1 00:12:55.003 00:12:55.003 ' 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:55.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.003 --rc genhtml_branch_coverage=1 00:12:55.003 --rc genhtml_function_coverage=1 00:12:55.003 --rc genhtml_legend=1 00:12:55.003 --rc geninfo_all_blocks=1 00:12:55.003 --rc geninfo_unexecuted_blocks=1 00:12:55.003 00:12:55.003 ' 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:55.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.003 --rc genhtml_branch_coverage=1 00:12:55.003 --rc genhtml_function_coverage=1 00:12:55.003 --rc genhtml_legend=1 00:12:55.003 --rc geninfo_all_blocks=1 00:12:55.003 --rc geninfo_unexecuted_blocks=1 00:12:55.003 00:12:55.003 ' 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.003 18:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.003 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:55.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:55.004 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:55.004 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:55.004 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:55.004 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:55.004 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:55.004 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:55.004 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.004 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:55.004 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:55.004 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:55.004 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.004 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.004 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.265 18:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:55.265 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:55.265 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:55.265 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:03.407 18:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:03.407 18:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:03.407 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:03.407 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.407 18:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:03.407 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.407 
18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:03.407 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.407 18:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:03.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:13:03.407 00:13:03.407 --- 10.0.0.2 ping statistics --- 00:13:03.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.407 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:03.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:13:03.407 00:13:03.407 --- 10.0.0.1 ping statistics --- 00:13:03.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.407 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1897137 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 1897137 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1897137 ']' 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.407 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:03.407 [2024-11-19 18:12:04.048423] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:13:03.407 [2024-11-19 18:12:04.048487] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.407 [2024-11-19 18:12:04.147625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.407 [2024-11-19 18:12:04.200895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.407 [2024-11-19 18:12:04.200946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:03.407 [2024-11-19 18:12:04.200955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.407 [2024-11-19 18:12:04.200963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.407 [2024-11-19 18:12:04.200970] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.407 [2024-11-19 18:12:04.202992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.407 [2024-11-19 18:12:04.203151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.407 [2024-11-19 18:12:04.203202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.407 [2024-11-19 18:12:04.203257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.407 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.407 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:03.407 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:03.407 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:03.407 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:03.668 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.668 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:03.668 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:03.668 18:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:03.668 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:03.668 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:03.668 "nvmf_tgt_1" 00:13:03.929 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:03.929 "nvmf_tgt_2" 00:13:03.929 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:03.929 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:03.929 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:03.929 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:04.189 true 00:13:04.189 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:04.189 true 00:13:04.190 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:04.190 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:04.451 rmmod nvme_tcp 00:13:04.451 rmmod nvme_fabrics 00:13:04.451 rmmod nvme_keyring 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1897137 ']' 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1897137 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1897137 ']' 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1897137 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1897137 00:13:04.451 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:04.452 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:04.452 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1897137' 00:13:04.452 killing process with pid 1897137 00:13:04.452 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1897137 00:13:04.452 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1897137 00:13:04.713 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:04.713 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:04.713 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:04.713 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:04.713 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:04.713 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:04.713 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:04.713 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:04.713 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:04.713 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.713 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.713 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:07.262 00:13:07.262 real 0m11.884s 00:13:07.262 user 0m10.225s 00:13:07.262 sys 0m6.245s 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:07.262 ************************************ 00:13:07.262 END TEST nvmf_multitarget 00:13:07.262 ************************************ 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:07.262 ************************************ 00:13:07.262 START TEST nvmf_rpc 00:13:07.262 ************************************ 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:07.262 * Looking for test storage... 
00:13:07.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.262 18:12:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.262 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:07.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.263 --rc genhtml_branch_coverage=1 00:13:07.263 --rc genhtml_function_coverage=1 00:13:07.263 --rc genhtml_legend=1 00:13:07.263 --rc geninfo_all_blocks=1 00:13:07.263 --rc geninfo_unexecuted_blocks=1 
00:13:07.263 00:13:07.263 ' 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:07.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.263 --rc genhtml_branch_coverage=1 00:13:07.263 --rc genhtml_function_coverage=1 00:13:07.263 --rc genhtml_legend=1 00:13:07.263 --rc geninfo_all_blocks=1 00:13:07.263 --rc geninfo_unexecuted_blocks=1 00:13:07.263 00:13:07.263 ' 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:07.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.263 --rc genhtml_branch_coverage=1 00:13:07.263 --rc genhtml_function_coverage=1 00:13:07.263 --rc genhtml_legend=1 00:13:07.263 --rc geninfo_all_blocks=1 00:13:07.263 --rc geninfo_unexecuted_blocks=1 00:13:07.263 00:13:07.263 ' 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:07.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.263 --rc genhtml_branch_coverage=1 00:13:07.263 --rc genhtml_function_coverage=1 00:13:07.263 --rc genhtml_legend=1 00:13:07.263 --rc geninfo_all_blocks=1 00:13:07.263 --rc geninfo_unexecuted_blocks=1 00:13:07.263 00:13:07.263 ' 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.263 18:12:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:07.263 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:07.263 18:12:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.431 
18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 
(0x8086 - 0x159b)' 00:13:15.431 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.431 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:15.432 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:15.432 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:15.432 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.432 18:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:15.432 
18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:15.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:13:15.432 00:13:15.432 --- 10.0.0.2 ping statistics --- 00:13:15.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.432 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:15.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:13:15.432 00:13:15.432 --- 10.0.0.1 ping statistics --- 00:13:15.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.432 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1902061 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1902061 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1902061 ']' 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.432 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.433 [2024-11-19 18:12:15.972173] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:13:15.433 [2024-11-19 18:12:15.972241] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.433 [2024-11-19 18:12:16.072529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.433 [2024-11-19 18:12:16.125894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.433 [2024-11-19 18:12:16.125950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:15.433 [2024-11-19 18:12:16.125959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.433 [2024-11-19 18:12:16.125966] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.433 [2024-11-19 18:12:16.125978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.433 [2024-11-19 18:12:16.128054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.433 [2024-11-19 18:12:16.128217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.433 [2024-11-19 18:12:16.128379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.433 [2024-11-19 18:12:16.128382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.433 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.433 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:15.433 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:15.433 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:15.433 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.433 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.433 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:15.433 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.433 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.433 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.433 18:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:15.433 "tick_rate": 2400000000, 00:13:15.433 "poll_groups": [ 00:13:15.433 { 00:13:15.433 "name": "nvmf_tgt_poll_group_000", 00:13:15.433 "admin_qpairs": 0, 00:13:15.433 "io_qpairs": 0, 00:13:15.433 "current_admin_qpairs": 0, 00:13:15.433 "current_io_qpairs": 0, 00:13:15.433 "pending_bdev_io": 0, 00:13:15.433 "completed_nvme_io": 0, 00:13:15.433 "transports": [] 00:13:15.433 }, 00:13:15.433 { 00:13:15.433 "name": "nvmf_tgt_poll_group_001", 00:13:15.433 "admin_qpairs": 0, 00:13:15.433 "io_qpairs": 0, 00:13:15.433 "current_admin_qpairs": 0, 00:13:15.433 "current_io_qpairs": 0, 00:13:15.433 "pending_bdev_io": 0, 00:13:15.433 "completed_nvme_io": 0, 00:13:15.433 "transports": [] 00:13:15.433 }, 00:13:15.433 { 00:13:15.433 "name": "nvmf_tgt_poll_group_002", 00:13:15.433 "admin_qpairs": 0, 00:13:15.433 "io_qpairs": 0, 00:13:15.433 "current_admin_qpairs": 0, 00:13:15.433 "current_io_qpairs": 0, 00:13:15.433 "pending_bdev_io": 0, 00:13:15.433 "completed_nvme_io": 0, 00:13:15.433 "transports": [] 00:13:15.433 }, 00:13:15.433 { 00:13:15.433 "name": "nvmf_tgt_poll_group_003", 00:13:15.433 "admin_qpairs": 0, 00:13:15.433 "io_qpairs": 0, 00:13:15.433 "current_admin_qpairs": 0, 00:13:15.433 "current_io_qpairs": 0, 00:13:15.433 "pending_bdev_io": 0, 00:13:15.433 "completed_nvme_io": 0, 00:13:15.433 "transports": [] 00:13:15.433 } 00:13:15.433 ] 00:13:15.433 }' 00:13:15.433 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:15.433 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:15.433 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:15.433 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:15.695 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:15.695 18:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:15.695 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:15.695 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:15.695 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.695 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.695 [2024-11-19 18:12:16.965097] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.695 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.695 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:15.695 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.695 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.695 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.695 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:15.695 "tick_rate": 2400000000, 00:13:15.695 "poll_groups": [ 00:13:15.695 { 00:13:15.695 "name": "nvmf_tgt_poll_group_000", 00:13:15.695 "admin_qpairs": 0, 00:13:15.695 "io_qpairs": 0, 00:13:15.695 "current_admin_qpairs": 0, 00:13:15.695 "current_io_qpairs": 0, 00:13:15.695 "pending_bdev_io": 0, 00:13:15.695 "completed_nvme_io": 0, 00:13:15.695 "transports": [ 00:13:15.695 { 00:13:15.695 "trtype": "TCP" 00:13:15.695 } 00:13:15.695 ] 00:13:15.695 }, 00:13:15.695 { 00:13:15.695 "name": "nvmf_tgt_poll_group_001", 00:13:15.695 "admin_qpairs": 0, 00:13:15.695 "io_qpairs": 0, 00:13:15.695 "current_admin_qpairs": 0, 00:13:15.695 "current_io_qpairs": 0, 00:13:15.695 "pending_bdev_io": 0, 00:13:15.695 
"completed_nvme_io": 0, 00:13:15.695 "transports": [ 00:13:15.695 { 00:13:15.695 "trtype": "TCP" 00:13:15.695 } 00:13:15.695 ] 00:13:15.695 }, 00:13:15.695 { 00:13:15.695 "name": "nvmf_tgt_poll_group_002", 00:13:15.695 "admin_qpairs": 0, 00:13:15.695 "io_qpairs": 0, 00:13:15.695 "current_admin_qpairs": 0, 00:13:15.695 "current_io_qpairs": 0, 00:13:15.695 "pending_bdev_io": 0, 00:13:15.695 "completed_nvme_io": 0, 00:13:15.695 "transports": [ 00:13:15.695 { 00:13:15.695 "trtype": "TCP" 00:13:15.695 } 00:13:15.695 ] 00:13:15.695 }, 00:13:15.695 { 00:13:15.695 "name": "nvmf_tgt_poll_group_003", 00:13:15.695 "admin_qpairs": 0, 00:13:15.695 "io_qpairs": 0, 00:13:15.695 "current_admin_qpairs": 0, 00:13:15.695 "current_io_qpairs": 0, 00:13:15.695 "pending_bdev_io": 0, 00:13:15.695 "completed_nvme_io": 0, 00:13:15.695 "transports": [ 00:13:15.695 { 00:13:15.695 "trtype": "TCP" 00:13:15.695 } 00:13:15.695 ] 00:13:15.695 } 00:13:15.695 ] 00:13:15.695 }' 00:13:15.695 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:15.695 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:15.695 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:15.695 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:15.695 
18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.695 Malloc1 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:15.695 18:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.695 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.957 [2024-11-19 18:12:17.168247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:15.957 [2024-11-19 18:12:17.205344] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:15.957 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:15.957 could not add new controller: failed to write to nvme-fabrics device 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.957 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.872 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.872 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:17.872 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.872 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:17.872 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:19.785 18:12:20 
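The `waitforserial` helper exercised above (common/autotest_common.sh@1202-1212) polls `lsblk -l -o NAME,SERIAL` until a block device carrying the expected serial shows up. A hedged sketch, with `lsblk` stubbed out as a shell function so the example is self-contained (the real helper queries the actual device list and sleeps 2 s between attempts):

```shell
# Sketch of waitforserial: poll the NAME,SERIAL listing until at least
# nvme_device_counter devices with the given serial appear, retrying up
# to 16 times. lsblk here is a stub standing in for the real command.
lsblk() { printf 'nvme0n1 SPDKISFASTANDAWESOME\n'; }

waitforserial() {
    local serial=$1 i=0 nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices >= nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME && echo "serial visible"
```

`waitforserial_disconnect` (sh@1223-1235) is the mirror image: it loops until `grep -q -w` on the same listing no longer finds the serial after `nvme disconnect`.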
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:19.785 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.786 [2024-11-19 18:12:20.973370] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:19.786 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:19.786 could not add new controller: failed to write to nvme-fabrics device 00:13:19.786 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:19.786 
18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:19.786 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:19.786 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:19.786 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:19.786 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.786 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.786 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.786 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.172 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.172 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:21.172 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.172 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:21.172 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:23.719 18:12:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 [2024-11-19 18:12:24.732035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.719 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.103 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:25.103 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:25.103 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:25.103 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:25.103 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:27.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:27.016 
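Each loop iteration above (target/rpc.sh@81-94) issues the same RPC sequence through `rpc_cmd`, SPDK's wrapper around `scripts/rpc.py`. Collected as a command fragment for reference — this assumes a running nvmf target listening on 10.0.0.2 and is not meant to run stand-alone:

```shell
# One iteration of the create/connect/teardown loop, as plain rpc.py calls.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # --hostnqn/--hostid flags as in the log
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

The namespace ID (`-n 5` / `remove_ns ... 5`) and serial match the values exercised in this run; `allow_any_host` is what lets the connect succeed, since the earlier attempts without it failed with "Subsystem ... does not allow host".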
18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.016 [2024-11-19 18:12:28.459564] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.016 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.277 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.277 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.662 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:28.662 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:28.662 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:28.662 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:28.662 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:31.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.208 18:12:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.208 [2024-11-19 18:12:32.264141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.208 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:32.592 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:32.592 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:32.592 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:32.592 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:32.592 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.504 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.764 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.765 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:34.765 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.765 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.765 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.765 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.765 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:13:34.765 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.765 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.765 [2024-11-19 18:12:35.999854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.765 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.765 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:34.765 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.765 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.765 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.765 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.765 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.765 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.765 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.765 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:36.150 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:36.150 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:36.150 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:13:36.151 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:36.151 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:38.063 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:38.063 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:38.063 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:38.063 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:38.063 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:38.063 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:38.063 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:38.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.324 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:38.324 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:38.324 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:38.324 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.324 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:38.324 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:38.324 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:13:38.324 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.324 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.324 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.324 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.324 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.324 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.324 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.586 [2024-11-19 18:12:39.819111] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.586 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:39.971 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:39.971 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:39.971 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:39.971 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:39.971 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:42.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.511 [2024-11-19 18:12:43.577059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:42.511 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 [2024-11-19 18:12:43.641217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 
18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:13:42.512 [2024-11-19 18:12:43.709432] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 [2024-11-19 18:12:43.781651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 [2024-11-19 18:12:43.849877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:42.512 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:42.513 "tick_rate": 2400000000, 00:13:42.513 "poll_groups": [ 00:13:42.513 { 00:13:42.513 "name": "nvmf_tgt_poll_group_000", 00:13:42.513 "admin_qpairs": 0, 00:13:42.513 "io_qpairs": 224, 00:13:42.513 "current_admin_qpairs": 0, 00:13:42.513 "current_io_qpairs": 0, 00:13:42.513 "pending_bdev_io": 0, 00:13:42.513 "completed_nvme_io": 430, 00:13:42.513 "transports": [ 00:13:42.513 { 00:13:42.513 "trtype": "TCP" 00:13:42.513 } 00:13:42.513 ] 00:13:42.513 }, 00:13:42.513 { 00:13:42.513 "name": "nvmf_tgt_poll_group_001", 00:13:42.513 "admin_qpairs": 1, 00:13:42.513 "io_qpairs": 223, 00:13:42.513 "current_admin_qpairs": 0, 00:13:42.513 "current_io_qpairs": 0, 00:13:42.513 "pending_bdev_io": 0, 00:13:42.513 "completed_nvme_io": 314, 00:13:42.513 "transports": [ 00:13:42.513 { 00:13:42.513 "trtype": "TCP" 00:13:42.513 } 00:13:42.513 ] 00:13:42.513 }, 00:13:42.513 { 00:13:42.513 "name": "nvmf_tgt_poll_group_002", 00:13:42.513 "admin_qpairs": 6, 00:13:42.513 "io_qpairs": 218, 00:13:42.513 "current_admin_qpairs": 0, 00:13:42.513 "current_io_qpairs": 0, 00:13:42.513 "pending_bdev_io": 0, 
00:13:42.513 "completed_nvme_io": 222, 00:13:42.513 "transports": [ 00:13:42.513 { 00:13:42.513 "trtype": "TCP" 00:13:42.513 } 00:13:42.513 ] 00:13:42.513 }, 00:13:42.513 { 00:13:42.513 "name": "nvmf_tgt_poll_group_003", 00:13:42.513 "admin_qpairs": 0, 00:13:42.513 "io_qpairs": 224, 00:13:42.513 "current_admin_qpairs": 0, 00:13:42.513 "current_io_qpairs": 0, 00:13:42.513 "pending_bdev_io": 0, 00:13:42.513 "completed_nvme_io": 273, 00:13:42.513 "transports": [ 00:13:42.513 { 00:13:42.513 "trtype": "TCP" 00:13:42.513 } 00:13:42.513 ] 00:13:42.513 } 00:13:42.513 ] 00:13:42.513 }' 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:42.513 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:42.774 rmmod nvme_tcp 00:13:42.774 rmmod nvme_fabrics 00:13:42.774 rmmod nvme_keyring 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1902061 ']' 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1902061 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1902061 ']' 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1902061 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1902061 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1902061' 00:13:42.774 killing process with pid 1902061 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1902061 00:13:42.774 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1902061 00:13:43.036 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:43.036 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:43.036 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:43.036 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:43.036 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:43.036 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:43.036 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:43.036 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:43.036 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:43.036 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.036 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.036 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.950 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:44.950 00:13:44.950 real 0m38.156s 00:13:44.950 user 1m54.614s 00:13:44.950 sys 0m7.787s 00:13:44.950 18:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.950 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.950 ************************************ 00:13:44.950 END TEST nvmf_rpc 00:13:44.950 ************************************ 00:13:44.950 18:12:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:44.950 18:12:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:44.950 18:12:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.950 18:12:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:45.211 ************************************ 00:13:45.211 START TEST nvmf_invalid 00:13:45.211 ************************************ 00:13:45.211 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:45.211 * Looking for test storage... 
00:13:45.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:45.211 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:45.211 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:45.211 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:45.211 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:45.211 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:45.211 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:45.211 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:45.211 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.211 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:45.211 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:45.211 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:45.211 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:45.211 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:45.211 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:45.211 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:45.211 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:45.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.212 --rc genhtml_branch_coverage=1 00:13:45.212 --rc 
genhtml_function_coverage=1 00:13:45.212 --rc genhtml_legend=1 00:13:45.212 --rc geninfo_all_blocks=1 00:13:45.212 --rc geninfo_unexecuted_blocks=1 00:13:45.212 00:13:45.212 ' 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:45.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.212 --rc genhtml_branch_coverage=1 00:13:45.212 --rc genhtml_function_coverage=1 00:13:45.212 --rc genhtml_legend=1 00:13:45.212 --rc geninfo_all_blocks=1 00:13:45.212 --rc geninfo_unexecuted_blocks=1 00:13:45.212 00:13:45.212 ' 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:45.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.212 --rc genhtml_branch_coverage=1 00:13:45.212 --rc genhtml_function_coverage=1 00:13:45.212 --rc genhtml_legend=1 00:13:45.212 --rc geninfo_all_blocks=1 00:13:45.212 --rc geninfo_unexecuted_blocks=1 00:13:45.212 00:13:45.212 ' 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:45.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.212 --rc genhtml_branch_coverage=1 00:13:45.212 --rc genhtml_function_coverage=1 00:13:45.212 --rc genhtml_legend=1 00:13:45.212 --rc geninfo_all_blocks=1 00:13:45.212 --rc geninfo_unexecuted_blocks=1 00:13:45.212 00:13:45.212 ' 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.212 18:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:45.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:45.212 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:45.213 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.213 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:45.213 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:45.213 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:45.213 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:45.213 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:45.213 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.213 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:45.213 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:45.213 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:45.213 18:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.213 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.213 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.474 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:45.474 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:45.474 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:45.474 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:53.615 18:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.615 18:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:53.615 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:53.615 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:53.615 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:53.615 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.615 18:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:53.615 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.616 18:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:53.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:13:53.616 00:13:53.616 --- 10.0.0.2 ping statistics --- 00:13:53.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.616 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:53.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:13:53.616 00:13:53.616 --- 10.0.0.1 ping statistics --- 00:13:53.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.616 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:53.616 18:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1911934 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1911934 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1911934 ']' 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.616 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:53.616 [2024-11-19 18:12:54.195000] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:13:53.616 [2024-11-19 18:12:54.195095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.616 [2024-11-19 18:12:54.293921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:53.616 [2024-11-19 18:12:54.346794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.616 [2024-11-19 18:12:54.346845] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.616 [2024-11-19 18:12:54.346854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.616 [2024-11-19 18:12:54.346861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.616 [2024-11-19 18:12:54.346867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:53.616 [2024-11-19 18:12:54.348904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.616 [2024-11-19 18:12:54.349068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.616 [2024-11-19 18:12:54.349231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.616 [2024-11-19 18:12:54.349231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.616 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.616 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:53.616 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:53.616 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:53.616 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:53.616 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.616 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:53.616 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25134 00:13:53.878 [2024-11-19 18:12:55.225446] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:53.878 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:53.878 { 00:13:53.878 "nqn": "nqn.2016-06.io.spdk:cnode25134", 00:13:53.878 "tgt_name": "foobar", 00:13:53.878 "method": "nvmf_create_subsystem", 00:13:53.878 "req_id": 1 00:13:53.878 } 00:13:53.878 Got JSON-RPC error 
response 00:13:53.878 response: 00:13:53.878 { 00:13:53.878 "code": -32603, 00:13:53.878 "message": "Unable to find target foobar" 00:13:53.878 }' 00:13:53.878 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:53.878 { 00:13:53.878 "nqn": "nqn.2016-06.io.spdk:cnode25134", 00:13:53.878 "tgt_name": "foobar", 00:13:53.878 "method": "nvmf_create_subsystem", 00:13:53.878 "req_id": 1 00:13:53.878 } 00:13:53.878 Got JSON-RPC error response 00:13:53.878 response: 00:13:53.878 { 00:13:53.878 "code": -32603, 00:13:53.878 "message": "Unable to find target foobar" 00:13:53.878 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:53.878 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:53.878 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode10725 00:13:54.140 [2024-11-19 18:12:55.434367] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10725: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:54.140 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:54.140 { 00:13:54.140 "nqn": "nqn.2016-06.io.spdk:cnode10725", 00:13:54.140 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:54.140 "method": "nvmf_create_subsystem", 00:13:54.140 "req_id": 1 00:13:54.140 } 00:13:54.140 Got JSON-RPC error response 00:13:54.140 response: 00:13:54.140 { 00:13:54.140 "code": -32602, 00:13:54.140 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:54.140 }' 00:13:54.140 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:54.140 { 00:13:54.140 "nqn": "nqn.2016-06.io.spdk:cnode10725", 00:13:54.140 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:54.140 "method": "nvmf_create_subsystem", 
00:13:54.140 "req_id": 1 00:13:54.140 } 00:13:54.140 Got JSON-RPC error response 00:13:54.140 response: 00:13:54.140 { 00:13:54.140 "code": -32602, 00:13:54.140 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:54.140 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:54.140 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:54.140 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20928 00:13:54.403 [2024-11-19 18:12:55.643095] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20928: invalid model number 'SPDK_Controller' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:54.403 { 00:13:54.403 "nqn": "nqn.2016-06.io.spdk:cnode20928", 00:13:54.403 "model_number": "SPDK_Controller\u001f", 00:13:54.403 "method": "nvmf_create_subsystem", 00:13:54.403 "req_id": 1 00:13:54.403 } 00:13:54.403 Got JSON-RPC error response 00:13:54.403 response: 00:13:54.403 { 00:13:54.403 "code": -32602, 00:13:54.403 "message": "Invalid MN SPDK_Controller\u001f" 00:13:54.403 }' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:54.403 { 00:13:54.403 "nqn": "nqn.2016-06.io.spdk:cnode20928", 00:13:54.403 "model_number": "SPDK_Controller\u001f", 00:13:54.403 "method": "nvmf_create_subsystem", 00:13:54.403 "req_id": 1 00:13:54.403 } 00:13:54.403 Got JSON-RPC error response 00:13:54.403 response: 00:13:54.403 { 00:13:54.403 "code": -32602, 00:13:54.403 "message": "Invalid MN SPDK_Controller\u001f" 00:13:54.403 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.403 18:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:54.403 18:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:54.403 18:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:54.403 18:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.403 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.403 18:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.404 18:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ y == \- ]] 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'y/!B]8dEbWC#IleMWD7\|' 00:13:54.404 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'y/!B]8dEbWC#IleMWD7\|' nqn.2016-06.io.spdk:cnode14703 00:13:54.665 [2024-11-19 18:12:56.012528] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14703: invalid serial number 'y/!B]8dEbWC#IleMWD7\|' 00:13:54.665 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:54.665 { 00:13:54.665 "nqn": "nqn.2016-06.io.spdk:cnode14703", 00:13:54.665 "serial_number": "y/!B]8dEbWC#IleMWD7\\|", 00:13:54.665 "method": "nvmf_create_subsystem", 00:13:54.665 "req_id": 1 00:13:54.665 } 00:13:54.665 Got JSON-RPC error response 00:13:54.666 response: 00:13:54.666 { 00:13:54.666 "code": -32602, 00:13:54.666 "message": "Invalid SN y/!B]8dEbWC#IleMWD7\\|" 00:13:54.666 }' 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:54.666 { 00:13:54.666 "nqn": "nqn.2016-06.io.spdk:cnode14703", 00:13:54.666 "serial_number": "y/!B]8dEbWC#IleMWD7\\|", 00:13:54.666 "method": "nvmf_create_subsystem", 00:13:54.666 "req_id": 1 00:13:54.666 } 00:13:54.666 Got JSON-RPC error response 00:13:54.666 response: 00:13:54.666 { 00:13:54.666 "code": -32602, 00:13:54.666 "message": "Invalid SN y/!B]8dEbWC#IleMWD7\\|" 00:13:54.666 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:54.666 18:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.666 18:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:54.666 18:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.666 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:54.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:54.928 18:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:54.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:54.929 18:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 
00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:54.929 
18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
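The trace above loops over `invalid.sh` lines 24-25, appending one character per iteration by converting a code point to hex with `printf %x` and decoding it with `echo -e '\xNN'`. A minimal standalone sketch of the same technique (the code points below are illustrative, not the ones from this run):

```shell
# Build a string from a list of character code points, mirroring the
# printf %x / echo -e pattern traced in the log above.
codes=(65 93 68 39 53 84)   # illustrative code points: A ] D ' 5 T
string=""
ll=0
length=${#codes[@]}
while (( ll < length )); do
    hex=$(printf %x "${codes[ll]}")   # decimal -> hex, e.g. 65 -> 41
    ch=$(echo -e "\x$hex")            # hex escape -> character, e.g. \x41 -> A
    string+=$ch
    (( ll++ ))
done
echo "$string"   # -> A]D'5T
```

This only works for printable characters; a newline (code 10) would be stripped by the command substitution.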
00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.929 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 
00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:54.930 
18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.930 18:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ - == \- ]] 00:13:54.930 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@29 -- # string='\-h_}ic2mAgjK|&w=!A]D'\''5T`b4?c /dev/null' 00:13:57.282 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.200 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- 
# ip -4 addr flush cvl_0_1 00:13:59.200 00:13:59.200 real 0m14.130s 00:13:59.200 user 0m21.122s 00:13:59.200 sys 0m6.761s 00:13:59.200 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.200 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:59.200 ************************************ 00:13:59.200 END TEST nvmf_invalid 00:13:59.200 ************************************ 00:13:59.200 18:13:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:59.200 18:13:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:59.200 18:13:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.200 18:13:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:59.200 ************************************ 00:13:59.200 START TEST nvmf_connect_stress 00:13:59.200 ************************************ 00:13:59.200 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:59.462 * Looking for test storage... 
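Each suite above is launched through `run_test`, which prints a `START TEST` banner, times the script, and prints an `END TEST` banner (the `real/user/sys` summary in the log). The real helper lives in SPDK's `autotest_common.sh` and also manages xtrace and failure tracking; a simplified sketch of just the observable banner/timing behavior:

```shell
# Simplified sketch of a run_test-style wrapper (assumed shape, not
# SPDK's actual implementation): banner, timed run, banner.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"        # e.g. connect_stress.sh --transport=tcp; emits real/user/sys
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test_sketch demo_test true   # trivial example command
```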
00:13:59.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:59.462 18:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:59.462 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:59.463 18:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:59.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.463 --rc genhtml_branch_coverage=1 00:13:59.463 --rc genhtml_function_coverage=1 00:13:59.463 --rc genhtml_legend=1 00:13:59.463 --rc geninfo_all_blocks=1 00:13:59.463 --rc geninfo_unexecuted_blocks=1 00:13:59.463 00:13:59.463 ' 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:59.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.463 --rc genhtml_branch_coverage=1 00:13:59.463 --rc genhtml_function_coverage=1 00:13:59.463 --rc genhtml_legend=1 00:13:59.463 --rc geninfo_all_blocks=1 00:13:59.463 --rc geninfo_unexecuted_blocks=1 00:13:59.463 00:13:59.463 ' 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:59.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.463 --rc genhtml_branch_coverage=1 00:13:59.463 --rc genhtml_function_coverage=1 00:13:59.463 --rc genhtml_legend=1 00:13:59.463 --rc geninfo_all_blocks=1 00:13:59.463 --rc geninfo_unexecuted_blocks=1 00:13:59.463 00:13:59.463 ' 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:59.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.463 --rc genhtml_branch_coverage=1 00:13:59.463 --rc genhtml_function_coverage=1 00:13:59.463 --rc genhtml_legend=1 00:13:59.463 --rc geninfo_all_blocks=1 00:13:59.463 --rc geninfo_unexecuted_blocks=1 00:13:59.463 00:13:59.463 ' 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
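The `lt 1.15 2` trace above walks `scripts/common.sh`'s `cmp_versions`: both versions are split on `.-:` into arrays and compared field by field. A self-contained sketch of that comparison (simplified: it assumes purely numeric fields, whereas the traced `decimal` helper also validates them against `^[0-9]+$`):

```shell
# Component-wise "less than" version compare, after the cmp_versions
# logic traced in the log: split on ".-:" and compare numeric fields.
lt_sketch() {   # returns 0 if version $1 < version $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

lt_sketch 1.15 2 && echo "lcov 1.15 < 2"
```

With lcov 1.15 the first field already decides (`1 < 2`), matching the `return 0` path in the trace.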
00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:59.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
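The `[: : integer expression expected` message above is `nvmf/common.sh` line 33 running `[ '' -eq 1 ]`: an empty value is not a valid operand for the numeric `-eq` operator, so the test errors out instead of evaluating false. A short sketch reproducing the failure and a common guard (defaulting the variable before the arithmetic test):

```shell
# Reproduce the "[: : integer expression expected" error from the log:
# the numeric -eq test rejects an empty string as an operand.
flag=""
[ "$flag" -eq 1 ] 2>/dev/null
status=$?
echo "empty -eq test exit status: $status"   # nonzero: expression error, not just false

# Common guard: supply a numeric default so the test stays well-formed.
[ "${flag:-0}" -eq 1 ] || echo "flag not set (defaulted to 0)"
```

In bash, `[` distinguishes a false test (status 1) from a malformed expression (status 2), which is why the log still proceeds: the script treats the error like a false result.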
00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:59.463 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.464 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.464 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.464 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:59.464 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:59.464 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:59.464 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.611 18:13:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:07.611 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:07.611 18:13:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:07.611 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:07.611 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.612 18:13:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:07.612 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:07.612 Found net devices under 0000:4b:00.1: cvl_0_1 
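The `Found net devices under 0000:4b:00.x: cvl_0_x` lines come from globbing each PCI address's `net/` subdirectory in sysfs and stripping the paths down to interface names (the `"${pci_net_devs[@]##*/}"` step in `nvmf/common.sh`). A standalone sketch of that enumeration, scanning all PCI devices rather than a filtered NIC list:

```shell
# Sketch of the sysfs enumeration traced above: for each PCI device,
# glob its net/ directory and keep only the interface basenames.
shopt -s nullglob                # an empty glob yields an empty array, not a literal
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    pci_net_devs=("$pci/net/"*)                # e.g. .../0000:4b:00.0/net/cvl_0_0
    (( ${#pci_net_devs[@]} == 0 )) && continue # device has no network interface
    pci_net_devs=("${pci_net_devs[@]##*/}")    # strip path, keep interface name
    echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
echo "total interfaces: ${#net_devs[@]}"
```

On a machine without networking PCI devices (or without `/sys`) the loop simply collects nothing, which is why the harness follows up with the `(( 2 == 0 ))` count checks seen in the trace.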
00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:07.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:07.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:14:07.612 00:14:07.612 --- 10.0.0.2 ping statistics --- 00:14:07.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.612 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:14:07.612 00:14:07.612 --- 10.0.0.1 ping statistics --- 00:14:07.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.612 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:07.612 18:13:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1917114 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1917114 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1917114 ']' 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.612 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.612 [2024-11-19 18:13:08.439472] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:14:07.612 [2024-11-19 18:13:08.439539] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.612 [2024-11-19 18:13:08.539178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:07.612 [2024-11-19 18:13:08.590278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.612 [2024-11-19 18:13:08.590324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.612 [2024-11-19 18:13:08.590333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.612 [2024-11-19 18:13:08.590340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.612 [2024-11-19 18:13:08.590347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:07.612 [2024-11-19 18:13:08.592432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.612 [2024-11-19 18:13:08.592658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.612 [2024-11-19 18:13:08.592660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.874 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.874 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:07.874 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:07.874 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:07.874 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.874 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.874 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:07.874 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.874 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.874 [2024-11-19 18:13:09.327205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.874 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.874 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:07.874 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:07.874 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.136 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.136 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.136 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.136 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.136 [2024-11-19 18:13:09.352839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.136 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.136 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:08.136 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.136 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.136 NULL1 00:14:08.136 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.136 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1917219 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.137 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.398 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.398 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:08.398 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.398 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.398 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.970 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.970 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:08.970 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.970 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.970 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.230 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.230 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:09.230 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.230 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.230 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.490 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.490 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:09.490 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.490 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.490 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.751 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.751 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:09.751 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.751 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.751 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.012 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.012 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:10.012 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.012 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.012 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.585 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.585 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:10.585 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.585 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.585 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.846 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.846 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:10.846 18:13:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.846 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.846 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.107 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.107 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:11.107 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.107 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.107 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.369 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.369 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:11.369 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.369 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.369 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.630 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.630 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:11.630 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.630 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.630 
18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.202 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.202 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:12.202 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.202 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.202 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.463 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.463 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:12.463 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.463 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.463 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.724 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.724 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:12.724 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.724 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.724 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.984 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.984 
18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:12.984 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.984 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.984 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.245 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.245 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:13.245 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.245 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.245 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.816 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.816 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:13.816 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.816 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.816 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.077 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.077 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:14.077 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:14:14.077 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.077 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.337 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.337 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:14.337 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.337 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.337 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.597 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.597 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:14.597 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.597 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.597 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.858 18:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.858 18:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:14.858 18:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.858 18:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.858 18:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:14:15.428 18:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.428 18:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:15.428 18:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.428 18:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.428 18:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.688 18:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.688 18:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:15.688 18:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.688 18:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.688 18:13:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.949 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.949 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:15.949 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.949 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.949 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.210 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.210 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 1917219 00:14:16.210 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.210 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.210 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.781 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.781 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:16.781 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.781 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.781 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.041 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.041 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:17.041 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.041 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.041 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.303 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.304 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:17.304 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.304 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:17.304 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.564 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.564 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:17.564 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.564 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.564 18:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:17.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.085 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1917219 00:14:18.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1917219) - No such process 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1917219 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:18.346 rmmod nvme_tcp 00:14:18.346 rmmod nvme_fabrics 00:14:18.346 rmmod nvme_keyring 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1917114 ']' 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1917114 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1917114 ']' 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1917114 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@959 -- # uname 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1917114 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1917114' 00:14:18.346 killing process with pid 1917114 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1917114 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1917114 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:18.346 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:18.607 18:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:18.607 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.607 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.607 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.524 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:20.524 00:14:20.524 real 0m21.237s 00:14:20.524 user 0m42.317s 00:14:20.524 sys 0m9.261s 00:14:20.524 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:20.524 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.524 ************************************ 00:14:20.524 END TEST nvmf_connect_stress 00:14:20.524 ************************************ 00:14:20.524 18:13:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:20.524 18:13:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:20.524 18:13:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:20.524 18:13:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:20.524 ************************************ 00:14:20.524 START TEST nvmf_fused_ordering 00:14:20.524 ************************************ 00:14:20.524 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:20.786 * Looking for test storage... 
00:14:20.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:20.786 18:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:20.786 18:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:20.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.786 --rc genhtml_branch_coverage=1 00:14:20.786 --rc genhtml_function_coverage=1 00:14:20.786 --rc genhtml_legend=1 00:14:20.786 --rc geninfo_all_blocks=1 00:14:20.786 --rc geninfo_unexecuted_blocks=1 00:14:20.786 00:14:20.786 ' 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:20.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.786 --rc genhtml_branch_coverage=1 00:14:20.786 --rc genhtml_function_coverage=1 00:14:20.786 --rc genhtml_legend=1 00:14:20.786 --rc geninfo_all_blocks=1 00:14:20.786 --rc geninfo_unexecuted_blocks=1 00:14:20.786 00:14:20.786 ' 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:20.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.786 --rc genhtml_branch_coverage=1 00:14:20.786 --rc genhtml_function_coverage=1 00:14:20.786 --rc genhtml_legend=1 00:14:20.786 --rc geninfo_all_blocks=1 00:14:20.786 --rc geninfo_unexecuted_blocks=1 00:14:20.786 00:14:20.786 ' 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:20.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.786 --rc genhtml_branch_coverage=1 00:14:20.786 --rc genhtml_function_coverage=1 00:14:20.786 --rc genhtml_legend=1 00:14:20.786 --rc geninfo_all_blocks=1 00:14:20.786 --rc geninfo_unexecuted_blocks=1 00:14:20.786 00:14:20.786 ' 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.786 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:20.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:20.787 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:28.940 18:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:28.940 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:28.941 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:28.941 18:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:28.941 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.941 18:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:28.941 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:28.941 Found net devices under 0000:4b:00.1: cvl_0_1 
00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:28.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:28.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:14:28.941 00:14:28.941 --- 10.0.0.2 ping statistics --- 00:14:28.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.941 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:28.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:28.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:14:28.941 00:14:28.941 --- 10.0.0.1 ping statistics --- 00:14:28.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.941 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:28.941 18:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1923510 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1923510 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1923510 ']' 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.941 18:13:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:28.941 [2024-11-19 18:13:29.710034] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:14:28.941 [2024-11-19 18:13:29.710096] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.941 [2024-11-19 18:13:29.809331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.941 [2024-11-19 18:13:29.859586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.942 [2024-11-19 18:13:29.859638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.942 [2024-11-19 18:13:29.859648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.942 [2024-11-19 18:13:29.859655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.942 [2024-11-19 18:13:29.859662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:28.942 [2024-11-19 18:13:29.860484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:29.203 [2024-11-19 18:13:30.573764] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:29.203 [2024-11-19 18:13:30.598062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:29.203 NULL1 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.203 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:29.203 [2024-11-19 18:13:30.668061] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:14:29.203 [2024-11-19 18:13:30.668105] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1923657 ] 00:14:29.775 Attached to nqn.2016-06.io.spdk:cnode1 00:14:29.775 Namespace ID: 1 size: 1GB 00:14:29.775 fused_ordering(0) 00:14:29.775 fused_ordering(1) 00:14:29.775 fused_ordering(2) 00:14:29.775 fused_ordering(3) 00:14:29.775 fused_ordering(4) 00:14:29.775 fused_ordering(5) 00:14:29.775 fused_ordering(6) 00:14:29.775 fused_ordering(7) 00:14:29.775 fused_ordering(8) 00:14:29.775 fused_ordering(9) 00:14:29.775 fused_ordering(10) 00:14:29.775 fused_ordering(11) 00:14:29.775 fused_ordering(12) 00:14:29.775 fused_ordering(13) 00:14:29.775 fused_ordering(14) 00:14:29.775 fused_ordering(15) 00:14:29.775 fused_ordering(16) 00:14:29.775 fused_ordering(17) 00:14:29.775 fused_ordering(18) 00:14:29.775 fused_ordering(19) 00:14:29.775 fused_ordering(20) 00:14:29.775 fused_ordering(21) 00:14:29.775 fused_ordering(22) 00:14:29.775 fused_ordering(23) 00:14:29.775 fused_ordering(24) 00:14:29.775 fused_ordering(25) 00:14:29.775 fused_ordering(26) 00:14:29.775 fused_ordering(27) 00:14:29.775 
fused_ordering(28) 00:14:29.775 fused_ordering(29) 00:14:29.775 fused_ordering(30) 00:14:29.775 fused_ordering(31) 00:14:29.775 fused_ordering(32) 00:14:29.775 fused_ordering(33) 00:14:29.775 fused_ordering(34) 00:14:29.775 fused_ordering(35) 00:14:29.775 fused_ordering(36) 00:14:29.775 fused_ordering(37) 00:14:29.775 fused_ordering(38) 00:14:29.775 fused_ordering(39) 00:14:29.775 fused_ordering(40) 00:14:29.775 fused_ordering(41) 00:14:29.776 fused_ordering(42) 00:14:29.776 fused_ordering(43) 00:14:29.776 fused_ordering(44) 00:14:29.776 fused_ordering(45) 00:14:29.776 fused_ordering(46) 00:14:29.776 fused_ordering(47) 00:14:29.776 fused_ordering(48) 00:14:29.776 fused_ordering(49) 00:14:29.776 fused_ordering(50) 00:14:29.776 fused_ordering(51) 00:14:29.776 fused_ordering(52) 00:14:29.776 fused_ordering(53) 00:14:29.776 fused_ordering(54) 00:14:29.776 fused_ordering(55) 00:14:29.776 fused_ordering(56) 00:14:29.776 fused_ordering(57) 00:14:29.776 fused_ordering(58) 00:14:29.776 fused_ordering(59) 00:14:29.776 fused_ordering(60) 00:14:29.776 fused_ordering(61) 00:14:29.776 fused_ordering(62) 00:14:29.776 fused_ordering(63) 00:14:29.776 fused_ordering(64) 00:14:29.776 fused_ordering(65) 00:14:29.776 fused_ordering(66) 00:14:29.776 fused_ordering(67) 00:14:29.776 fused_ordering(68) 00:14:29.776 fused_ordering(69) 00:14:29.776 fused_ordering(70) 00:14:29.776 fused_ordering(71) 00:14:29.776 fused_ordering(72) 00:14:29.776 fused_ordering(73) 00:14:29.776 fused_ordering(74) 00:14:29.776 fused_ordering(75) 00:14:29.776 fused_ordering(76) 00:14:29.776 fused_ordering(77) 00:14:29.776 fused_ordering(78) 00:14:29.776 fused_ordering(79) 00:14:29.776 fused_ordering(80) 00:14:29.776 fused_ordering(81) 00:14:29.776 fused_ordering(82) 00:14:29.776 fused_ordering(83) 00:14:29.776 fused_ordering(84) 00:14:29.776 fused_ordering(85) 00:14:29.776 fused_ordering(86) 00:14:29.776 fused_ordering(87) 00:14:29.776 fused_ordering(88) 00:14:29.776 fused_ordering(89) 00:14:29.776 
fused_ordering(90) 00:14:29.776 fused_ordering(91) 00:14:29.776 fused_ordering(92) 00:14:29.776 fused_ordering(93) 00:14:29.776 fused_ordering(94) 00:14:29.776 fused_ordering(95) 00:14:29.776 fused_ordering(96) 00:14:29.776 fused_ordering(97) 00:14:29.776 fused_ordering(98) 00:14:29.776 fused_ordering(99) 00:14:29.776 fused_ordering(100) 00:14:29.776 fused_ordering(101) 00:14:29.776 fused_ordering(102) 00:14:29.776 fused_ordering(103) 00:14:29.776 fused_ordering(104) 00:14:29.776 fused_ordering(105) 00:14:29.776 fused_ordering(106) 00:14:29.776 fused_ordering(107) 00:14:29.776 fused_ordering(108) 00:14:29.776 fused_ordering(109) 00:14:29.776 fused_ordering(110) 00:14:29.776 fused_ordering(111) 00:14:29.776 fused_ordering(112) 00:14:29.776 fused_ordering(113) 00:14:29.776 fused_ordering(114) 00:14:29.776 fused_ordering(115) 00:14:29.776 fused_ordering(116) 00:14:29.776 fused_ordering(117) 00:14:29.776 fused_ordering(118) 00:14:29.776 fused_ordering(119) 00:14:29.776 fused_ordering(120) 00:14:29.776 fused_ordering(121) 00:14:29.776 fused_ordering(122) 00:14:29.776 fused_ordering(123) 00:14:29.776 fused_ordering(124) 00:14:29.776 fused_ordering(125) 00:14:29.776 fused_ordering(126) 00:14:29.776 fused_ordering(127) 00:14:29.776 fused_ordering(128) 00:14:29.776 fused_ordering(129) 00:14:29.776 fused_ordering(130) 00:14:29.776 fused_ordering(131) 00:14:29.776 fused_ordering(132) 00:14:29.776 fused_ordering(133) 00:14:29.776 fused_ordering(134) 00:14:29.776 fused_ordering(135) 00:14:29.776 fused_ordering(136) 00:14:29.776 fused_ordering(137) 00:14:29.776 fused_ordering(138) 00:14:29.776 fused_ordering(139) 00:14:29.776 fused_ordering(140) 00:14:29.776 fused_ordering(141) 00:14:29.776 fused_ordering(142) 00:14:29.776 fused_ordering(143) 00:14:29.776 fused_ordering(144) 00:14:29.776 fused_ordering(145) 00:14:29.776 fused_ordering(146) 00:14:29.776 fused_ordering(147) 00:14:29.776 fused_ordering(148) 00:14:29.776 fused_ordering(149) 00:14:29.776 fused_ordering(150) 
00:14:29.776 fused_ordering(151) 00:14:29.776 fused_ordering(152) 00:14:29.776 fused_ordering(153) 00:14:29.776 fused_ordering(154) 00:14:29.776 fused_ordering(155) 00:14:29.776 fused_ordering(156) 00:14:29.776 fused_ordering(157) 00:14:29.776 fused_ordering(158) 00:14:29.776 fused_ordering(159) 00:14:29.776 fused_ordering(160) 00:14:29.776 fused_ordering(161) 00:14:29.776 fused_ordering(162) 00:14:29.776 fused_ordering(163) 00:14:29.776 fused_ordering(164) 00:14:29.776 fused_ordering(165) 00:14:29.776 fused_ordering(166) 00:14:29.776 fused_ordering(167) 00:14:29.776 fused_ordering(168) 00:14:29.776 fused_ordering(169) 00:14:29.776 fused_ordering(170) 00:14:29.776 fused_ordering(171) 00:14:29.776 fused_ordering(172) 00:14:29.776 fused_ordering(173) 00:14:29.776 fused_ordering(174) 00:14:29.776 fused_ordering(175) 00:14:29.776 fused_ordering(176) 00:14:29.776 fused_ordering(177) 00:14:29.776 fused_ordering(178) 00:14:29.776 fused_ordering(179) 00:14:29.776 fused_ordering(180) 00:14:29.776 fused_ordering(181) 00:14:29.776 fused_ordering(182) 00:14:29.776 fused_ordering(183) 00:14:29.776 fused_ordering(184) 00:14:29.776 fused_ordering(185) 00:14:29.776 fused_ordering(186) 00:14:29.776 fused_ordering(187) 00:14:29.776 fused_ordering(188) 00:14:29.776 fused_ordering(189) 00:14:29.776 fused_ordering(190) 00:14:29.776 fused_ordering(191) 00:14:29.776 fused_ordering(192) 00:14:29.776 fused_ordering(193) 00:14:29.776 fused_ordering(194) 00:14:29.776 fused_ordering(195) 00:14:29.776 fused_ordering(196) 00:14:29.776 fused_ordering(197) 00:14:29.776 fused_ordering(198) 00:14:29.776 fused_ordering(199) 00:14:29.776 fused_ordering(200) 00:14:29.776 fused_ordering(201) 00:14:29.776 fused_ordering(202) 00:14:29.776 fused_ordering(203) 00:14:29.776 fused_ordering(204) 00:14:29.776 fused_ordering(205) 00:14:30.037 fused_ordering(206) 00:14:30.037 fused_ordering(207) 00:14:30.037 fused_ordering(208) 00:14:30.037 fused_ordering(209) 00:14:30.037 fused_ordering(210) 00:14:30.037 
fused_ordering(211) 00:14:30.037 fused_ordering(212) 00:14:30.037 fused_ordering(213) 00:14:30.037 fused_ordering(214) 00:14:30.037 fused_ordering(215) 00:14:30.037 fused_ordering(216) 00:14:30.037 fused_ordering(217) 00:14:30.037 fused_ordering(218) 00:14:30.037 fused_ordering(219) 00:14:30.037 fused_ordering(220) 00:14:30.037 fused_ordering(221) 00:14:30.037 fused_ordering(222) 00:14:30.037 fused_ordering(223) 00:14:30.037 fused_ordering(224) 00:14:30.037 fused_ordering(225) 00:14:30.037 fused_ordering(226) 00:14:30.037 fused_ordering(227) 00:14:30.037 fused_ordering(228) 00:14:30.037 fused_ordering(229) 00:14:30.037 fused_ordering(230) 00:14:30.037 fused_ordering(231) 00:14:30.037 fused_ordering(232) 00:14:30.037 fused_ordering(233) 00:14:30.037 fused_ordering(234) 00:14:30.037 fused_ordering(235) 00:14:30.037 fused_ordering(236) 00:14:30.037 fused_ordering(237) 00:14:30.037 fused_ordering(238) 00:14:30.037 fused_ordering(239) 00:14:30.037 fused_ordering(240) 00:14:30.037 fused_ordering(241) 00:14:30.037 fused_ordering(242) 00:14:30.037 fused_ordering(243) 00:14:30.037 fused_ordering(244) 00:14:30.037 fused_ordering(245) 00:14:30.037 fused_ordering(246) 00:14:30.037 fused_ordering(247) 00:14:30.037 fused_ordering(248) 00:14:30.037 fused_ordering(249) 00:14:30.037 fused_ordering(250) 00:14:30.037 fused_ordering(251) 00:14:30.037 fused_ordering(252) 00:14:30.037 fused_ordering(253) 00:14:30.037 fused_ordering(254) 00:14:30.037 fused_ordering(255) 00:14:30.037 fused_ordering(256) 00:14:30.037 fused_ordering(257) 00:14:30.037 fused_ordering(258) 00:14:30.037 fused_ordering(259) 00:14:30.037 fused_ordering(260) 00:14:30.037 fused_ordering(261) 00:14:30.037 fused_ordering(262) 00:14:30.037 fused_ordering(263) 00:14:30.037 fused_ordering(264) 00:14:30.037 fused_ordering(265) 00:14:30.037 fused_ordering(266) 00:14:30.037 fused_ordering(267) 00:14:30.037 fused_ordering(268) 00:14:30.037 fused_ordering(269) 00:14:30.037 fused_ordering(270) 00:14:30.037 fused_ordering(271) 
00:14:30.037 fused_ordering(272) 00:14:30.037 fused_ordering(273) 00:14:30.037 fused_ordering(274) 00:14:30.037 fused_ordering(275) 00:14:30.037 fused_ordering(276) 00:14:30.037 fused_ordering(277) 00:14:30.037 fused_ordering(278) 00:14:30.037 fused_ordering(279) 00:14:30.037 fused_ordering(280) 00:14:30.037 fused_ordering(281) 00:14:30.037 fused_ordering(282) 00:14:30.037 fused_ordering(283) 00:14:30.037 fused_ordering(284) 00:14:30.037 fused_ordering(285) 00:14:30.037 fused_ordering(286) 00:14:30.037 fused_ordering(287) 00:14:30.037 fused_ordering(288) 00:14:30.037 fused_ordering(289) 00:14:30.037 fused_ordering(290) 00:14:30.037 fused_ordering(291) 00:14:30.037 fused_ordering(292) 00:14:30.037 fused_ordering(293) 00:14:30.037 fused_ordering(294) 00:14:30.037 fused_ordering(295) 00:14:30.037 fused_ordering(296) 00:14:30.037 fused_ordering(297) 00:14:30.037 fused_ordering(298) 00:14:30.037 fused_ordering(299) 00:14:30.037 fused_ordering(300) 00:14:30.037 fused_ordering(301) 00:14:30.037 fused_ordering(302) 00:14:30.037 fused_ordering(303) 00:14:30.037 fused_ordering(304) 00:14:30.037 fused_ordering(305) 00:14:30.037 fused_ordering(306) 00:14:30.037 fused_ordering(307) 00:14:30.037 fused_ordering(308) 00:14:30.037 fused_ordering(309) 00:14:30.038 fused_ordering(310) 00:14:30.038 fused_ordering(311) 00:14:30.038 fused_ordering(312) 00:14:30.038 fused_ordering(313) 00:14:30.038 fused_ordering(314) 00:14:30.038 fused_ordering(315) 00:14:30.038 fused_ordering(316) 00:14:30.038 fused_ordering(317) 00:14:30.038 fused_ordering(318) 00:14:30.038 fused_ordering(319) 00:14:30.038 fused_ordering(320) 00:14:30.038 fused_ordering(321) 00:14:30.038 fused_ordering(322) 00:14:30.038 fused_ordering(323) 00:14:30.038 fused_ordering(324) 00:14:30.038 fused_ordering(325) 00:14:30.038 fused_ordering(326) 00:14:30.038 fused_ordering(327) 00:14:30.038 fused_ordering(328) 00:14:30.038 fused_ordering(329) 00:14:30.038 fused_ordering(330) 00:14:30.038 fused_ordering(331) 00:14:30.038 
fused_ordering(332) 00:14:30.038 fused_ordering(333) 00:14:30.038 fused_ordering(334) 00:14:30.038 fused_ordering(335) 00:14:30.038 fused_ordering(336) 00:14:30.038 fused_ordering(337) 00:14:30.038 fused_ordering(338) 00:14:30.038 fused_ordering(339) 00:14:30.038 fused_ordering(340) 00:14:30.038 fused_ordering(341) 00:14:30.038 fused_ordering(342) 00:14:30.038 fused_ordering(343) 00:14:30.038 fused_ordering(344) 00:14:30.038 fused_ordering(345) 00:14:30.038 fused_ordering(346) 00:14:30.038 fused_ordering(347) 00:14:30.038 fused_ordering(348) 00:14:30.038 fused_ordering(349) 00:14:30.038 fused_ordering(350) 00:14:30.038 fused_ordering(351) 00:14:30.038 fused_ordering(352) 00:14:30.038 fused_ordering(353) 00:14:30.038 fused_ordering(354) 00:14:30.038 fused_ordering(355) 00:14:30.038 fused_ordering(356) 00:14:30.038 fused_ordering(357) 00:14:30.038 fused_ordering(358) 00:14:30.038 fused_ordering(359) 00:14:30.038 fused_ordering(360) 00:14:30.038 fused_ordering(361) 00:14:30.038 fused_ordering(362) 00:14:30.038 fused_ordering(363) 00:14:30.038 fused_ordering(364) 00:14:30.038 fused_ordering(365) 00:14:30.038 fused_ordering(366) 00:14:30.038 fused_ordering(367) 00:14:30.038 fused_ordering(368) 00:14:30.038 fused_ordering(369) 00:14:30.038 fused_ordering(370) 00:14:30.038 fused_ordering(371) 00:14:30.038 fused_ordering(372) 00:14:30.038 fused_ordering(373) 00:14:30.038 fused_ordering(374) 00:14:30.038 fused_ordering(375) 00:14:30.038 fused_ordering(376) 00:14:30.038 fused_ordering(377) 00:14:30.038 fused_ordering(378) 00:14:30.038 fused_ordering(379) 00:14:30.038 fused_ordering(380) 00:14:30.038 fused_ordering(381) 00:14:30.038 fused_ordering(382) 00:14:30.038 fused_ordering(383) 00:14:30.038 fused_ordering(384) 00:14:30.038 fused_ordering(385) 00:14:30.038 fused_ordering(386) 00:14:30.038 fused_ordering(387) 00:14:30.038 fused_ordering(388) 00:14:30.038 fused_ordering(389) 00:14:30.038 fused_ordering(390) 00:14:30.038 fused_ordering(391) 00:14:30.038 fused_ordering(392) 
00:14:30.038 fused_ordering(393) 00:14:30.038 fused_ordering(394) 00:14:30.038 fused_ordering(395) 00:14:30.038 fused_ordering(396) 00:14:30.038 fused_ordering(397) 00:14:30.038 fused_ordering(398) 00:14:30.038 fused_ordering(399) 00:14:30.038 fused_ordering(400) 00:14:30.038 fused_ordering(401) 00:14:30.038 fused_ordering(402) 00:14:30.038 fused_ordering(403) 00:14:30.038 fused_ordering(404) 00:14:30.038 fused_ordering(405) 00:14:30.038 fused_ordering(406) 00:14:30.038 fused_ordering(407) 00:14:30.038 fused_ordering(408) 00:14:30.038 fused_ordering(409) 00:14:30.038 fused_ordering(410) 00:14:30.610 fused_ordering(411) 00:14:30.610 fused_ordering(412) 00:14:30.610 fused_ordering(413) 00:14:30.610 fused_ordering(414) 00:14:30.610 fused_ordering(415) 00:14:30.610 fused_ordering(416) 00:14:30.610 fused_ordering(417) 00:14:30.610 fused_ordering(418) 00:14:30.610 fused_ordering(419) 00:14:30.610 fused_ordering(420) 00:14:30.610 fused_ordering(421) 00:14:30.610 fused_ordering(422) 00:14:30.610 fused_ordering(423) 00:14:30.610 fused_ordering(424) 00:14:30.610 fused_ordering(425) 00:14:30.610 fused_ordering(426) 00:14:30.610 fused_ordering(427) 00:14:30.610 fused_ordering(428) 00:14:30.610 fused_ordering(429) 00:14:30.610 fused_ordering(430) 00:14:30.610 fused_ordering(431) 00:14:30.610 fused_ordering(432) 00:14:30.610 fused_ordering(433) 00:14:30.610 fused_ordering(434) 00:14:30.610 fused_ordering(435) 00:14:30.610 fused_ordering(436) 00:14:30.610 fused_ordering(437) 00:14:30.610 fused_ordering(438) 00:14:30.610 fused_ordering(439) 00:14:30.610 fused_ordering(440) 00:14:30.610 fused_ordering(441) 00:14:30.610 fused_ordering(442) 00:14:30.610 fused_ordering(443) 00:14:30.610 fused_ordering(444) 00:14:30.610 fused_ordering(445) 00:14:30.610 fused_ordering(446) 00:14:30.610 fused_ordering(447) 00:14:30.610 fused_ordering(448) 00:14:30.610 fused_ordering(449) 00:14:30.610 fused_ordering(450) 00:14:30.610 fused_ordering(451) 00:14:30.610 fused_ordering(452) 00:14:30.610 
00:14:30.610 fused_ordering(453) [... fused_ordering(454) through fused_ordering(1023) repeated, timestamps 00:14:30.610-00:14:31.758, elided ...] 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:31.758 rmmod nvme_tcp 00:14:31.758 rmmod nvme_fabrics 00:14:31.758 rmmod nvme_keyring 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1923510 ']' 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1923510 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1923510 ']' 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1923510 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1923510 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1923510' 00:14:31.758 killing process with pid 1923510 00:14:31.758 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1923510 00:14:31.759 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1923510 00:14:32.019 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:32.019 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:14:32.019 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:32.019 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:32.019 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:32.019 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:32.019 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:32.019 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:32.019 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:32.019 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.019 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:32.019 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:34.567 00:14:34.567 real 0m13.439s 00:14:34.567 user 0m7.087s 00:14:34.567 sys 0m7.217s 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:34.567 ************************************ 00:14:34.567 END TEST nvmf_fused_ordering 00:14:34.567 ************************************ 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:34.567 18:13:35 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:34.567 ************************************ 00:14:34.567 START TEST nvmf_ns_masking 00:14:34.567 ************************************ 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:34.567 * Looking for test storage... 00:14:34.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:34.567 18:13:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:34.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.567 --rc genhtml_branch_coverage=1 00:14:34.567 --rc genhtml_function_coverage=1 00:14:34.567 --rc genhtml_legend=1 00:14:34.567 --rc geninfo_all_blocks=1 00:14:34.567 --rc geninfo_unexecuted_blocks=1 00:14:34.567 00:14:34.567 ' 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:34.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.567 --rc genhtml_branch_coverage=1 00:14:34.567 --rc genhtml_function_coverage=1 00:14:34.567 --rc genhtml_legend=1 00:14:34.567 --rc geninfo_all_blocks=1 00:14:34.567 --rc geninfo_unexecuted_blocks=1 00:14:34.567 00:14:34.567 ' 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:34.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.567 --rc genhtml_branch_coverage=1 00:14:34.567 --rc genhtml_function_coverage=1 00:14:34.567 --rc genhtml_legend=1 00:14:34.567 --rc geninfo_all_blocks=1 00:14:34.567 --rc geninfo_unexecuted_blocks=1 00:14:34.567 00:14:34.567 ' 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:34.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.567 --rc genhtml_branch_coverage=1 00:14:34.567 --rc 
genhtml_function_coverage=1 00:14:34.567 --rc genhtml_legend=1 00:14:34.567 --rc geninfo_all_blocks=1 00:14:34.567 --rc geninfo_unexecuted_blocks=1 00:14:34.567 00:14:34.567 ' 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.567 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:34.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=1ccf7331-bf6d-4cd3-a030-75ee9b6cd2d0 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=0af11013-0379-49f0-bea2-3413fdd80619 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=54893194-4269-45f9-84d7-67e656154bf4 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:34.568 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:42.718 18:13:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:42.718 18:13:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:42.718 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:42.718 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:42.719 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:14:42.719 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:42.719 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:42.719 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:42.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:14:42.719 00:14:42.719 --- 10.0.0.2 ping statistics --- 00:14:42.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.719 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:42.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:42.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:14:42.719 00:14:42.719 --- 10.0.0.1 ping statistics --- 00:14:42.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.719 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1928355 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1928355 
00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1928355 ']' 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:42.719 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:42.719 [2024-11-19 18:13:43.293408] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:14:42.719 [2024-11-19 18:13:43.293473] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.719 [2024-11-19 18:13:43.392613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.719 [2024-11-19 18:13:43.443981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.719 [2024-11-19 18:13:43.444030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:42.719 [2024-11-19 18:13:43.444039] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.719 [2024-11-19 18:13:43.444047] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.719 [2024-11-19 18:13:43.444053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.719 [2024-11-19 18:13:43.444805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.719 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:42.720 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:42.720 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:42.720 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:42.720 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:42.720 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.720 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:42.981 [2024-11-19 18:13:44.326765] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.981 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:42.981 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:42.981 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:14:43.243 Malloc1 00:14:43.243 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:43.504 Malloc2 00:14:43.504 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:43.767 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:43.767 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.028 [2024-11-19 18:13:45.362608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.028 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:44.028 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 54893194-4269-45f9-84d7-67e656154bf4 -a 10.0.0.2 -s 4420 -i 4 00:14:44.290 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:44.290 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:44.290 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:44.290 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:44.290 18:13:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:46.381 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:46.381 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:46.381 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:46.381 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:46.381 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:46.381 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:46.381 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:46.381 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:46.381 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:46.381 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:46.381 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:46.381 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:46.381 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:46.381 [ 0]:0x1 00:14:46.381 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:46.381 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:46.381 
18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6c7dbee262b34d3aaa564297f5f2e043 00:14:46.381 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6c7dbee262b34d3aaa564297f5f2e043 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:46.381 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:46.653 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:46.653 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:46.653 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:46.653 [ 0]:0x1 00:14:46.653 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:46.653 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:46.653 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6c7dbee262b34d3aaa564297f5f2e043 00:14:46.653 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6c7dbee262b34d3aaa564297f5f2e043 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:46.653 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:46.653 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:46.653 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:46.653 [ 1]:0x2 00:14:46.653 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:14:46.653 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:46.653 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5b21887cd7424a6bb7be186cf6188047 00:14:46.653 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5b21887cd7424a6bb7be186cf6188047 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:46.653 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:46.653 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:46.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.944 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.944 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:47.243 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:47.243 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 54893194-4269-45f9-84d7-67e656154bf4 -a 10.0.0.2 -s 4420 -i 4 00:14:47.506 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:47.506 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:47.506 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:47.506 18:13:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:47.506 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:47.506 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:49.422 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:49.683 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:49.683 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:49.683 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:49.683 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:49.683 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:49.683 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:49.683 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:49.683 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:49.683 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:14:49.683 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:49.683 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:49.683 [ 0]:0x2 00:14:49.683 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:49.683 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:49.683 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5b21887cd7424a6bb7be186cf6188047 00:14:49.683 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5b21887cd7424a6bb7be186cf6188047 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:49.683 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:49.945 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:49.945 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:49.945 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:49.945 [ 0]:0x1 00:14:49.945 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:49.945 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:49.945 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6c7dbee262b34d3aaa564297f5f2e043 00:14:49.945 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6c7dbee262b34d3aaa564297f5f2e043 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:49.945 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:49.945 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:49.945 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:49.945 [ 1]:0x2 00:14:49.945 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:49.945 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:49.945 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5b21887cd7424a6bb7be186cf6188047 00:14:49.945 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5b21887cd7424a6bb7be186cf6188047 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:49.945 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:50.207 [ 0]:0x2 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5b21887cd7424a6bb7be186cf6188047 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5b21887cd7424a6bb7be186cf6188047 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:50.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.207 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:50.468 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:50.468 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 54893194-4269-45f9-84d7-67e656154bf4 -a 10.0.0.2 -s 4420 -i 4 00:14:50.730 18:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:50.730 18:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:50.730 18:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:50.730 18:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:50.730 18:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:50.730 18:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:52.646 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:52.646 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:52.646 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:52.647 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:52.647 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:52.647 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:52.647 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:52.647 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:52.908 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:52.908 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:52.908 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:52.908 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:52.908 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:52.908 [ 0]:0x1 00:14:52.908 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:52.908 18:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:52.908 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6c7dbee262b34d3aaa564297f5f2e043 00:14:52.908 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6c7dbee262b34d3aaa564297f5f2e043 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:52.908 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:52.908 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:52.908 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:52.908 [ 1]:0x2 00:14:52.908 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:52.908 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.170 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5b21887cd7424a6bb7be186cf6188047 00:14:53.170 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5b21887cd7424a6bb7be186cf6188047 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.170 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:53.170 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:53.170 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:53.170 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:53.170 
18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:53.170 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.170 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:53.170 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.170 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:53.170 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:53.170 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:53.170 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:53.170 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:53.433 [ 0]:0x2 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5b21887cd7424a6bb7be186cf6188047 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5b21887cd7424a6bb7be186cf6188047 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.433 18:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:53.433 [2024-11-19 18:13:54.856721] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:53.433 request: 00:14:53.433 { 00:14:53.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.433 "nsid": 2, 00:14:53.433 "host": "nqn.2016-06.io.spdk:host1", 00:14:53.433 "method": "nvmf_ns_remove_host", 00:14:53.433 "req_id": 1 00:14:53.433 } 00:14:53.433 Got JSON-RPC error response 00:14:53.433 response: 00:14:53.433 { 00:14:53.433 "code": -32602, 00:14:53.433 "message": "Invalid parameters" 00:14:53.433 } 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.433 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:53.696 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:53.696 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.696 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:53.696 18:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:53.696 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:53.696 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:53.696 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:53.696 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:53.696 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:53.696 [ 0]:0x2 00:14:53.696 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:53.696 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.696 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5b21887cd7424a6bb7be186cf6188047 00:14:53.696 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5b21887cd7424a6bb7be186cf6188047 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.696 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:53.696 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:53.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.696 18:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1930720 00:14:53.696 18:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:53.696 18:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.696 18:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1930720 /var/tmp/host.sock 00:14:53.696 18:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1930720 ']' 00:14:53.696 18:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:53.696 18:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.696 18:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:53.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:53.696 18:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.696 18:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:53.696 [2024-11-19 18:13:55.073682] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:14:53.696 [2024-11-19 18:13:55.073733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930720 ] 00:14:53.696 [2024-11-19 18:13:55.161262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.958 [2024-11-19 18:13:55.197083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.531 18:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.531 18:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:54.531 18:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.792 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:54.792 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 1ccf7331-bf6d-4cd3-a030-75ee9b6cd2d0 00:14:54.792 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:54.792 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 1CCF7331BF6D4CD3A03075EE9B6CD2D0 -i 00:14:55.053 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 0af11013-0379-49f0-bea2-3413fdd80619 00:14:55.053 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:55.053 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 0AF11013037949F0BEA23413FDD80619 -i 00:14:55.313 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:55.574 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:55.574 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:55.574 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:55.835 nvme0n1 00:14:55.835 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:55.835 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:56.095 nvme1n2 00:14:56.095 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:56.095 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:56.095 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:56.095 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:56.095 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:56.356 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:56.356 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:56.356 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:56.356 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:56.616 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 1ccf7331-bf6d-4cd3-a030-75ee9b6cd2d0 == \1\c\c\f\7\3\3\1\-\b\f\6\d\-\4\c\d\3\-\a\0\3\0\-\7\5\e\e\9\b\6\c\d\2\d\0 ]] 00:14:56.616 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:56.616 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:56.616 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:56.616 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 0af11013-0379-49f0-bea2-3413fdd80619 == \0\a\f\1\1\0\1\3\-\0\3\7\9\-\4\9\f\0\-\b\e\a\2\-\3\4\1\3\f\d\d\8\0\6\1\9 ]] 00:14:56.616 18:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.877 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:57.138 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 1ccf7331-bf6d-4cd3-a030-75ee9b6cd2d0 00:14:57.138 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:57.138 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 1CCF7331BF6D4CD3A03075EE9B6CD2D0 00:14:57.138 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:57.138 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 1CCF7331BF6D4CD3A03075EE9B6CD2D0 00:14:57.138 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:57.138 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:57.138 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:57.138 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:57.138 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:57.138 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:57.138 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:57.138 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:57.138 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 1CCF7331BF6D4CD3A03075EE9B6CD2D0 00:14:57.138 [2024-11-19 18:13:58.570450] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:57.138 [2024-11-19 18:13:58.570475] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:57.138 [2024-11-19 18:13:58.570483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.138 request: 00:14:57.138 { 00:14:57.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.138 "namespace": { 00:14:57.138 "bdev_name": "invalid", 00:14:57.138 "nsid": 1, 00:14:57.138 "nguid": "1CCF7331BF6D4CD3A03075EE9B6CD2D0", 00:14:57.138 "no_auto_visible": false 00:14:57.138 }, 00:14:57.138 "method": "nvmf_subsystem_add_ns", 00:14:57.138 "req_id": 1 00:14:57.138 } 00:14:57.138 Got JSON-RPC error response 00:14:57.138 response: 00:14:57.138 { 00:14:57.138 "code": -32602, 00:14:57.138 "message": "Invalid parameters" 00:14:57.138 } 00:14:57.138 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:57.138 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:57.139 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:57.139 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:57.139 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 1ccf7331-bf6d-4cd3-a030-75ee9b6cd2d0 00:14:57.400 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:57.400 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 1CCF7331BF6D4CD3A03075EE9B6CD2D0 -i 00:14:57.400 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:59.946 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:59.946 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:59.946 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:59.946 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:59.946 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1930720 00:14:59.946 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1930720 ']' 00:14:59.946 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1930720 00:14:59.946 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:59.946 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.946 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1930720 00:14:59.946 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:59.946 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:59.946 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1930720' 00:14:59.946 killing process with pid 1930720 00:14:59.946 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1930720 00:14:59.946 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1930720 00:14:59.946 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.946 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:59.946 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:59.946 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:59.946 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:59.946 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:59.946 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:59.947 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:59.947 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:59.947 rmmod nvme_tcp 00:14:59.947 rmmod 
nvme_fabrics 00:15:00.207 rmmod nvme_keyring 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1928355 ']' 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1928355 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1928355 ']' 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1928355 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1928355 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1928355' 00:15:00.207 killing process with pid 1928355 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1928355 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1928355 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:00.207 
18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:00.207 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:02.758 00:15:02.758 real 0m28.234s 00:15:02.758 user 0m31.939s 00:15:02.758 sys 0m8.336s 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:02.758 ************************************ 00:15:02.758 END TEST nvmf_ns_masking 00:15:02.758 ************************************ 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:02.758 ************************************ 00:15:02.758 START TEST nvmf_nvme_cli 00:15:02.758 ************************************ 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:02.758 * Looking for test storage... 00:15:02.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:02.758 18:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:02.758 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:02.758 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:02.758 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:02.758 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:02.758 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:02.758 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:02.758 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:02.758 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:02.758 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:02.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.759 --rc genhtml_branch_coverage=1 00:15:02.759 --rc genhtml_function_coverage=1 00:15:02.759 --rc genhtml_legend=1 00:15:02.759 --rc geninfo_all_blocks=1 00:15:02.759 --rc geninfo_unexecuted_blocks=1 00:15:02.759 
00:15:02.759 ' 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:02.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.759 --rc genhtml_branch_coverage=1 00:15:02.759 --rc genhtml_function_coverage=1 00:15:02.759 --rc genhtml_legend=1 00:15:02.759 --rc geninfo_all_blocks=1 00:15:02.759 --rc geninfo_unexecuted_blocks=1 00:15:02.759 00:15:02.759 ' 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:02.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.759 --rc genhtml_branch_coverage=1 00:15:02.759 --rc genhtml_function_coverage=1 00:15:02.759 --rc genhtml_legend=1 00:15:02.759 --rc geninfo_all_blocks=1 00:15:02.759 --rc geninfo_unexecuted_blocks=1 00:15:02.759 00:15:02.759 ' 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:02.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.759 --rc genhtml_branch_coverage=1 00:15:02.759 --rc genhtml_function_coverage=1 00:15:02.759 --rc genhtml_legend=1 00:15:02.759 --rc geninfo_all_blocks=1 00:15:02.759 --rc geninfo_unexecuted_blocks=1 00:15:02.759 00:15:02.759 ' 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.759 18:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:02.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:02.759 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:10.908 18:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:10.908 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:10.908 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.908 18:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:10.908 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:10.908 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:10.909 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:10.909 18:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:10.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:10.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:15:10.909 00:15:10.909 --- 10.0.0.2 ping statistics --- 00:15:10.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.909 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:10.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:10.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:15:10.909 00:15:10.909 --- 10.0.0.1 ping statistics --- 00:15:10.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.909 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:10.909 18:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1936319 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1936319 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1936319 ']' 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.909 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:10.909 [2024-11-19 18:14:11.636351] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:15:10.909 [2024-11-19 18:14:11.636414] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.909 [2024-11-19 18:14:11.738043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:10.909 [2024-11-19 18:14:11.792681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.909 [2024-11-19 18:14:11.792738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.909 [2024-11-19 18:14:11.792747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.909 [2024-11-19 18:14:11.792754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.909 [2024-11-19 18:14:11.792761] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:10.909 [2024-11-19 18:14:11.795107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.909 [2024-11-19 18:14:11.795266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.909 [2024-11-19 18:14:11.795623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:10.909 [2024-11-19 18:14:11.795626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:11.171 [2024-11-19 18:14:12.519737] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:11.171 Malloc0 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:11.171 Malloc1 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:11.171 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.172 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:11.172 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.172 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.172 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.172 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:11.172 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.172 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.172 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.172 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:11.172 [2024-11-19 18:14:12.631721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.172 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.172 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:11.172 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.172 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:11.433 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.433 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:11.433 00:15:11.433 Discovery Log Number of Records 2, Generation counter 2 00:15:11.433 =====Discovery Log Entry 0====== 00:15:11.433 trtype: tcp 00:15:11.433 adrfam: ipv4 00:15:11.433 subtype: current discovery subsystem 00:15:11.433 treq: not required 00:15:11.433 portid: 0 00:15:11.433 trsvcid: 4420 
00:15:11.433 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:11.433 traddr: 10.0.0.2 00:15:11.433 eflags: explicit discovery connections, duplicate discovery information 00:15:11.433 sectype: none 00:15:11.433 =====Discovery Log Entry 1====== 00:15:11.433 trtype: tcp 00:15:11.433 adrfam: ipv4 00:15:11.433 subtype: nvme subsystem 00:15:11.433 treq: not required 00:15:11.433 portid: 0 00:15:11.433 trsvcid: 4420 00:15:11.433 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:11.433 traddr: 10.0.0.2 00:15:11.433 eflags: none 00:15:11.433 sectype: none 00:15:11.433 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:11.433 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:11.433 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:11.433 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:11.433 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:11.433 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:11.433 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:11.433 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:11.433 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:11.433 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:11.433 18:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:13.350 18:14:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:13.350 18:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:13.350 18:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.350 18:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:13.350 18:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:13.350 18:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:15.267 
18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:15.267 /dev/nvme0n2 ]] 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:15.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:15.267 rmmod nvme_tcp 00:15:15.267 rmmod nvme_fabrics 00:15:15.267 rmmod nvme_keyring 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1936319 ']' 
00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1936319 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1936319 ']' 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1936319 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1936319 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1936319' 00:15:15.267 killing process with pid 1936319 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1936319 00:15:15.267 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1936319 00:15:15.529 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:15.529 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:15.529 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:15.529 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:15.529 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:15.529 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # 
iptables-restore 00:15:15.529 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:15.529 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:15.529 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:15.529 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.529 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:15.529 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.443 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:17.443 00:15:17.443 real 0m15.091s 00:15:17.443 user 0m22.298s 00:15:17.443 sys 0m6.424s 00:15:17.443 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.443 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.443 ************************************ 00:15:17.443 END TEST nvmf_nvme_cli 00:15:17.443 ************************************ 00:15:17.704 18:14:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:17.704 18:14:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:17.704 18:14:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:17.704 18:14:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.704 18:14:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:17.704 ************************************ 
00:15:17.704 START TEST nvmf_vfio_user 00:15:17.704 ************************************ 00:15:17.704 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:17.704 * Looking for test storage... 00:15:17.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.704 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:17.704 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:15:17.704 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:17.704 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:17.704 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:17.704 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:17.704 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:17.704 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.704 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:17.704 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:17.704 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:17.704 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:17.704 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:17.704 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:17.704 
18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:17.704 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:17.705 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:17.705 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:17.705 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:17.705 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:17.705 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:17.705 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.705 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:17.966 18:14:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:17.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.966 --rc genhtml_branch_coverage=1 00:15:17.966 --rc genhtml_function_coverage=1 00:15:17.966 --rc genhtml_legend=1 00:15:17.966 --rc geninfo_all_blocks=1 00:15:17.966 --rc geninfo_unexecuted_blocks=1 00:15:17.966 00:15:17.966 ' 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:17.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.966 --rc genhtml_branch_coverage=1 00:15:17.966 --rc genhtml_function_coverage=1 00:15:17.966 --rc genhtml_legend=1 00:15:17.966 --rc geninfo_all_blocks=1 00:15:17.966 --rc geninfo_unexecuted_blocks=1 00:15:17.966 00:15:17.966 ' 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:17.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.966 --rc genhtml_branch_coverage=1 00:15:17.966 --rc genhtml_function_coverage=1 00:15:17.966 --rc genhtml_legend=1 00:15:17.966 --rc geninfo_all_blocks=1 00:15:17.966 --rc geninfo_unexecuted_blocks=1 00:15:17.966 00:15:17.966 ' 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:17.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.966 --rc genhtml_branch_coverage=1 00:15:17.966 --rc genhtml_function_coverage=1 00:15:17.966 --rc genhtml_legend=1 00:15:17.966 --rc geninfo_all_blocks=1 00:15:17.966 --rc geninfo_unexecuted_blocks=1 00:15:17.966 00:15:17.966 ' 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.966 
18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:17.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:17.966 18:14:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1937919 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1937919' 00:15:17.966 Process pid: 1937919 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1937919 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1937919 ']' 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m '[0,1,2,3]' 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.966 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:17.966 [2024-11-19 18:14:19.288078] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:15:17.966 [2024-11-19 18:14:19.288150] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.966 [2024-11-19 18:14:19.377555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:17.966 [2024-11-19 18:14:19.418218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.966 [2024-11-19 18:14:19.418252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.966 [2024-11-19 18:14:19.418258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.966 [2024-11-19 18:14:19.418263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.966 [2024-11-19 18:14:19.418267] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:17.966 [2024-11-19 18:14:19.419737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.966 [2024-11-19 18:14:19.419890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.966 [2024-11-19 18:14:19.420042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.966 [2024-11-19 18:14:19.420044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.908 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.908 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:18.908 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:19.850 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:19.850 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:19.850 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:19.850 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:19.850 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:19.850 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:20.111 Malloc1 00:15:20.111 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:20.371 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:20.634 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:20.634 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:20.634 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:20.634 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:20.895 Malloc2 00:15:20.895 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:21.157 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:21.157 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:21.418 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:21.418 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:21.418 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:15:21.418 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:21.418 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:21.418 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:21.419 [2024-11-19 18:14:22.813457] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:15:21.419 [2024-11-19 18:14:22.813528] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1938610 ] 00:15:21.419 [2024-11-19 18:14:22.854450] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:21.419 [2024-11-19 18:14:22.856725] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.419 [2024-11-19 18:14:22.856742] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa0d3a87000 00:15:21.419 [2024-11-19 18:14:22.857742] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.419 [2024-11-19 18:14:22.858733] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.419 [2024-11-19 18:14:22.861162] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.419 [2024-11-19 18:14:22.861746] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.419 [2024-11-19 18:14:22.862751] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.419 [2024-11-19 18:14:22.863760] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.419 [2024-11-19 18:14:22.864762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.419 [2024-11-19 18:14:22.865771] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.419 [2024-11-19 18:14:22.866780] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.419 [2024-11-19 18:14:22.866787] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa0d3a7c000 00:15:21.419 [2024-11-19 18:14:22.867699] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:21.419 [2024-11-19 18:14:22.879295] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:21.419 [2024-11-19 18:14:22.879316] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:21.419 [2024-11-19 18:14:22.884889] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:15:21.419 [2024-11-19 18:14:22.884928] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:21.419 [2024-11-19 18:14:22.884991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:21.419 [2024-11-19 18:14:22.885004] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:21.419 [2024-11-19 18:14:22.885008] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:21.419 [2024-11-19 18:14:22.885896] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:21.419 [2024-11-19 18:14:22.885903] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:21.419 [2024-11-19 18:14:22.885908] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:21.419 [2024-11-19 18:14:22.886903] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:21.419 [2024-11-19 18:14:22.886910] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:21.419 [2024-11-19 18:14:22.886916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:21.683 [2024-11-19 18:14:22.887908] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:21.683 [2024-11-19 18:14:22.887916] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:21.683 [2024-11-19 18:14:22.888917] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:21.683 [2024-11-19 18:14:22.888925] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:21.683 [2024-11-19 18:14:22.888929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:21.683 [2024-11-19 18:14:22.888934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:21.683 [2024-11-19 18:14:22.889040] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:21.683 [2024-11-19 18:14:22.889044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:21.683 [2024-11-19 18:14:22.889048] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:21.683 [2024-11-19 18:14:22.889928] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:21.683 [2024-11-19 18:14:22.890927] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:21.683 [2024-11-19 18:14:22.891937] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:15:21.683 [2024-11-19 18:14:22.892930] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:21.683 [2024-11-19 18:14:22.892980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:21.683 [2024-11-19 18:14:22.893941] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:21.683 [2024-11-19 18:14:22.893947] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:21.683 [2024-11-19 18:14:22.893951] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:21.683 [2024-11-19 18:14:22.893966] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:21.683 [2024-11-19 18:14:22.893971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:21.683 [2024-11-19 18:14:22.893983] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.683 [2024-11-19 18:14:22.893986] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.683 [2024-11-19 18:14:22.893989] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.683 [2024-11-19 18:14:22.894000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.683 [2024-11-19 18:14:22.894033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:21.683 [2024-11-19 18:14:22.894041] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:21.683 [2024-11-19 18:14:22.894044] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:21.683 [2024-11-19 18:14:22.894047] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:21.683 [2024-11-19 18:14:22.894051] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:21.683 [2024-11-19 18:14:22.894056] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:21.683 [2024-11-19 18:14:22.894061] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:21.683 [2024-11-19 18:14:22.894064] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:21.683 [2024-11-19 18:14:22.894072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:21.683 [2024-11-19 18:14:22.894079] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:21.683 [2024-11-19 18:14:22.894091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:21.683 [2024-11-19 18:14:22.894099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.683 [2024-11-19 
18:14:22.894105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.683 [2024-11-19 18:14:22.894111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.683 [2024-11-19 18:14:22.894117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.683 [2024-11-19 18:14:22.894120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:21.683 [2024-11-19 18:14:22.894125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:21.683 [2024-11-19 18:14:22.894132] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:21.683 [2024-11-19 18:14:22.894141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:21.683 [2024-11-19 18:14:22.894147] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:21.683 [2024-11-19 18:14:22.894150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:21.683 [2024-11-19 18:14:22.894155] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:21.683 [2024-11-19 18:14:22.894163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:15:21.683 [2024-11-19 18:14:22.894170] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:21.683 [2024-11-19 18:14:22.894177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:21.683 [2024-11-19 18:14:22.894221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:21.683 [2024-11-19 18:14:22.894227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:21.683 [2024-11-19 18:14:22.894232] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:21.683 [2024-11-19 18:14:22.894235] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:21.683 [2024-11-19 18:14:22.894238] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.683 [2024-11-19 18:14:22.894244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:21.683 [2024-11-19 18:14:22.894255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:21.683 [2024-11-19 18:14:22.894262] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:21.683 [2024-11-19 18:14:22.894271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:21.683 [2024-11-19 18:14:22.894277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:21.683 [2024-11-19 18:14:22.894282] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.683 [2024-11-19 18:14:22.894285] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.683 [2024-11-19 18:14:22.894287] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.683 [2024-11-19 18:14:22.894292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.683 [2024-11-19 18:14:22.894306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:21.684 [2024-11-19 18:14:22.894317] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:21.684 [2024-11-19 18:14:22.894322] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:21.684 [2024-11-19 18:14:22.894327] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.684 [2024-11-19 18:14:22.894330] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.684 [2024-11-19 18:14:22.894333] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.684 [2024-11-19 18:14:22.894337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.684 [2024-11-19 18:14:22.894349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:21.684 [2024-11-19 18:14:22.894355] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:21.684 [2024-11-19 18:14:22.894360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:21.684 [2024-11-19 18:14:22.894366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:21.684 [2024-11-19 18:14:22.894370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:21.684 [2024-11-19 18:14:22.894374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:21.684 [2024-11-19 18:14:22.894378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:21.684 [2024-11-19 18:14:22.894381] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:21.684 [2024-11-19 18:14:22.894385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:21.684 [2024-11-19 18:14:22.894388] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:21.684 [2024-11-19 18:14:22.894403] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:21.684 [2024-11-19 18:14:22.894411] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:21.684 [2024-11-19 18:14:22.894420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:21.684 [2024-11-19 18:14:22.894431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:21.684 [2024-11-19 18:14:22.894440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:21.684 [2024-11-19 18:14:22.894453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:21.684 [2024-11-19 18:14:22.894461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:21.684 [2024-11-19 18:14:22.894471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:21.684 [2024-11-19 18:14:22.894481] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:21.684 [2024-11-19 18:14:22.894484] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:21.684 [2024-11-19 18:14:22.894487] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:21.684 [2024-11-19 18:14:22.894489] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:21.684 [2024-11-19 18:14:22.894491] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:21.684 [2024-11-19 18:14:22.894496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:15:21.684 [2024-11-19 18:14:22.894501] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:21.684 [2024-11-19 18:14:22.894504] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:21.684 [2024-11-19 18:14:22.894507] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.684 [2024-11-19 18:14:22.894511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:21.684 [2024-11-19 18:14:22.894516] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:21.684 [2024-11-19 18:14:22.894519] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.684 [2024-11-19 18:14:22.894522] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.684 [2024-11-19 18:14:22.894526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.684 [2024-11-19 18:14:22.894531] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:21.684 [2024-11-19 18:14:22.894534] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:21.684 [2024-11-19 18:14:22.894537] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.684 [2024-11-19 18:14:22.894541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:21.684 [2024-11-19 18:14:22.894546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:15:21.684 [2024-11-19 18:14:22.894554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:21.684 [2024-11-19 18:14:22.894563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:21.684 [2024-11-19 18:14:22.894568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:21.684 ===================================================== 00:15:21.684 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:21.684 ===================================================== 00:15:21.684 Controller Capabilities/Features 00:15:21.684 ================================ 00:15:21.684 Vendor ID: 4e58 00:15:21.684 Subsystem Vendor ID: 4e58 00:15:21.684 Serial Number: SPDK1 00:15:21.684 Model Number: SPDK bdev Controller 00:15:21.684 Firmware Version: 25.01 00:15:21.684 Recommended Arb Burst: 6 00:15:21.684 IEEE OUI Identifier: 8d 6b 50 00:15:21.684 Multi-path I/O 00:15:21.684 May have multiple subsystem ports: Yes 00:15:21.684 May have multiple controllers: Yes 00:15:21.684 Associated with SR-IOV VF: No 00:15:21.684 Max Data Transfer Size: 131072 00:15:21.684 Max Number of Namespaces: 32 00:15:21.684 Max Number of I/O Queues: 127 00:15:21.684 NVMe Specification Version (VS): 1.3 00:15:21.684 NVMe Specification Version (Identify): 1.3 00:15:21.684 Maximum Queue Entries: 256 00:15:21.684 Contiguous Queues Required: Yes 00:15:21.684 Arbitration Mechanisms Supported 00:15:21.684 Weighted Round Robin: Not Supported 00:15:21.684 Vendor Specific: Not Supported 00:15:21.684 Reset Timeout: 15000 ms 00:15:21.684 Doorbell Stride: 4 bytes 00:15:21.684 NVM Subsystem Reset: Not Supported 00:15:21.684 Command Sets Supported 00:15:21.684 NVM Command Set: Supported 00:15:21.684 Boot Partition: Not Supported 00:15:21.684 Memory 
Page Size Minimum: 4096 bytes 00:15:21.684 Memory Page Size Maximum: 4096 bytes 00:15:21.684 Persistent Memory Region: Not Supported 00:15:21.684 Optional Asynchronous Events Supported 00:15:21.684 Namespace Attribute Notices: Supported 00:15:21.684 Firmware Activation Notices: Not Supported 00:15:21.684 ANA Change Notices: Not Supported 00:15:21.684 PLE Aggregate Log Change Notices: Not Supported 00:15:21.684 LBA Status Info Alert Notices: Not Supported 00:15:21.684 EGE Aggregate Log Change Notices: Not Supported 00:15:21.684 Normal NVM Subsystem Shutdown event: Not Supported 00:15:21.684 Zone Descriptor Change Notices: Not Supported 00:15:21.684 Discovery Log Change Notices: Not Supported 00:15:21.684 Controller Attributes 00:15:21.684 128-bit Host Identifier: Supported 00:15:21.684 Non-Operational Permissive Mode: Not Supported 00:15:21.684 NVM Sets: Not Supported 00:15:21.684 Read Recovery Levels: Not Supported 00:15:21.684 Endurance Groups: Not Supported 00:15:21.684 Predictable Latency Mode: Not Supported 00:15:21.684 Traffic Based Keep ALive: Not Supported 00:15:21.684 Namespace Granularity: Not Supported 00:15:21.684 SQ Associations: Not Supported 00:15:21.684 UUID List: Not Supported 00:15:21.684 Multi-Domain Subsystem: Not Supported 00:15:21.684 Fixed Capacity Management: Not Supported 00:15:21.684 Variable Capacity Management: Not Supported 00:15:21.684 Delete Endurance Group: Not Supported 00:15:21.684 Delete NVM Set: Not Supported 00:15:21.684 Extended LBA Formats Supported: Not Supported 00:15:21.684 Flexible Data Placement Supported: Not Supported 00:15:21.684 00:15:21.684 Controller Memory Buffer Support 00:15:21.684 ================================ 00:15:21.684 Supported: No 00:15:21.684 00:15:21.684 Persistent Memory Region Support 00:15:21.684 ================================ 00:15:21.684 Supported: No 00:15:21.684 00:15:21.684 Admin Command Set Attributes 00:15:21.684 ============================ 00:15:21.684 Security Send/Receive: Not Supported 
00:15:21.684 Format NVM: Not Supported 00:15:21.684 Firmware Activate/Download: Not Supported 00:15:21.685 Namespace Management: Not Supported 00:15:21.685 Device Self-Test: Not Supported 00:15:21.685 Directives: Not Supported 00:15:21.685 NVMe-MI: Not Supported 00:15:21.685 Virtualization Management: Not Supported 00:15:21.685 Doorbell Buffer Config: Not Supported 00:15:21.685 Get LBA Status Capability: Not Supported 00:15:21.685 Command & Feature Lockdown Capability: Not Supported 00:15:21.685 Abort Command Limit: 4 00:15:21.685 Async Event Request Limit: 4 00:15:21.685 Number of Firmware Slots: N/A 00:15:21.685 Firmware Slot 1 Read-Only: N/A 00:15:21.685 Firmware Activation Without Reset: N/A 00:15:21.685 Multiple Update Detection Support: N/A 00:15:21.685 Firmware Update Granularity: No Information Provided 00:15:21.685 Per-Namespace SMART Log: No 00:15:21.685 Asymmetric Namespace Access Log Page: Not Supported 00:15:21.685 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:21.685 Command Effects Log Page: Supported 00:15:21.685 Get Log Page Extended Data: Supported 00:15:21.685 Telemetry Log Pages: Not Supported 00:15:21.685 Persistent Event Log Pages: Not Supported 00:15:21.685 Supported Log Pages Log Page: May Support 00:15:21.685 Commands Supported & Effects Log Page: Not Supported 00:15:21.685 Feature Identifiers & Effects Log Page:May Support 00:15:21.685 NVMe-MI Commands & Effects Log Page: May Support 00:15:21.685 Data Area 4 for Telemetry Log: Not Supported 00:15:21.685 Error Log Page Entries Supported: 128 00:15:21.685 Keep Alive: Supported 00:15:21.685 Keep Alive Granularity: 10000 ms 00:15:21.685 00:15:21.685 NVM Command Set Attributes 00:15:21.685 ========================== 00:15:21.685 Submission Queue Entry Size 00:15:21.685 Max: 64 00:15:21.685 Min: 64 00:15:21.685 Completion Queue Entry Size 00:15:21.685 Max: 16 00:15:21.685 Min: 16 00:15:21.685 Number of Namespaces: 32 00:15:21.685 Compare Command: Supported 00:15:21.685 Write Uncorrectable 
Command: Not Supported 00:15:21.685 Dataset Management Command: Supported 00:15:21.685 Write Zeroes Command: Supported 00:15:21.685 Set Features Save Field: Not Supported 00:15:21.685 Reservations: Not Supported 00:15:21.685 Timestamp: Not Supported 00:15:21.685 Copy: Supported 00:15:21.685 Volatile Write Cache: Present 00:15:21.685 Atomic Write Unit (Normal): 1 00:15:21.685 Atomic Write Unit (PFail): 1 00:15:21.685 Atomic Compare & Write Unit: 1 00:15:21.685 Fused Compare & Write: Supported 00:15:21.685 Scatter-Gather List 00:15:21.685 SGL Command Set: Supported (Dword aligned) 00:15:21.685 SGL Keyed: Not Supported 00:15:21.685 SGL Bit Bucket Descriptor: Not Supported 00:15:21.685 SGL Metadata Pointer: Not Supported 00:15:21.685 Oversized SGL: Not Supported 00:15:21.685 SGL Metadata Address: Not Supported 00:15:21.685 SGL Offset: Not Supported 00:15:21.685 Transport SGL Data Block: Not Supported 00:15:21.685 Replay Protected Memory Block: Not Supported 00:15:21.685 00:15:21.685 Firmware Slot Information 00:15:21.685 ========================= 00:15:21.685 Active slot: 1 00:15:21.685 Slot 1 Firmware Revision: 25.01 00:15:21.685 00:15:21.685 00:15:21.685 Commands Supported and Effects 00:15:21.685 ============================== 00:15:21.685 Admin Commands 00:15:21.685 -------------- 00:15:21.685 Get Log Page (02h): Supported 00:15:21.685 Identify (06h): Supported 00:15:21.685 Abort (08h): Supported 00:15:21.685 Set Features (09h): Supported 00:15:21.685 Get Features (0Ah): Supported 00:15:21.685 Asynchronous Event Request (0Ch): Supported 00:15:21.685 Keep Alive (18h): Supported 00:15:21.685 I/O Commands 00:15:21.685 ------------ 00:15:21.685 Flush (00h): Supported LBA-Change 00:15:21.685 Write (01h): Supported LBA-Change 00:15:21.685 Read (02h): Supported 00:15:21.685 Compare (05h): Supported 00:15:21.685 Write Zeroes (08h): Supported LBA-Change 00:15:21.685 Dataset Management (09h): Supported LBA-Change 00:15:21.685 Copy (19h): Supported LBA-Change 00:15:21.685 
00:15:21.685 Error Log 00:15:21.685 ========= 00:15:21.685 00:15:21.685 Arbitration 00:15:21.685 =========== 00:15:21.685 Arbitration Burst: 1 00:15:21.685 00:15:21.685 Power Management 00:15:21.685 ================ 00:15:21.685 Number of Power States: 1 00:15:21.685 Current Power State: Power State #0 00:15:21.685 Power State #0: 00:15:21.685 Max Power: 0.00 W 00:15:21.685 Non-Operational State: Operational 00:15:21.685 Entry Latency: Not Reported 00:15:21.685 Exit Latency: Not Reported 00:15:21.685 Relative Read Throughput: 0 00:15:21.685 Relative Read Latency: 0 00:15:21.685 Relative Write Throughput: 0 00:15:21.685 Relative Write Latency: 0 00:15:21.685 Idle Power: Not Reported 00:15:21.685 Active Power: Not Reported 00:15:21.685 Non-Operational Permissive Mode: Not Supported 00:15:21.685 00:15:21.685 Health Information 00:15:21.685 ================== 00:15:21.685 Critical Warnings: 00:15:21.685 Available Spare Space: OK 00:15:21.685 Temperature: OK 00:15:21.685 Device Reliability: OK 00:15:21.685 Read Only: No 00:15:21.685 Volatile Memory Backup: OK 00:15:21.685 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:21.685 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:21.685 Available Spare: 0% 00:15:21.685 Available Sp[2024-11-19 18:14:22.894641] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:21.685 [2024-11-19 18:14:22.894647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:21.685 [2024-11-19 18:14:22.894668] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:21.685 [2024-11-19 18:14:22.894675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.685 [2024-11-19 18:14:22.894680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.685 [2024-11-19 18:14:22.894684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.685 [2024-11-19 18:14:22.894689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.685 [2024-11-19 18:14:22.894956] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:21.685 [2024-11-19 18:14:22.894964] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:21.685 [2024-11-19 18:14:22.895958] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:21.685 [2024-11-19 18:14:22.895999] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:21.685 [2024-11-19 18:14:22.896004] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:21.685 [2024-11-19 18:14:22.896965] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:21.685 [2024-11-19 18:14:22.896973] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:21.685 [2024-11-19 18:14:22.897031] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:21.685 [2024-11-19 18:14:22.897980] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:21.685 are Threshold: 0% 00:15:21.685 Life Percentage Used: 0% 
00:15:21.685 Data Units Read: 0 00:15:21.685 Data Units Written: 0 00:15:21.685 Host Read Commands: 0 00:15:21.685 Host Write Commands: 0 00:15:21.685 Controller Busy Time: 0 minutes 00:15:21.685 Power Cycles: 0 00:15:21.685 Power On Hours: 0 hours 00:15:21.685 Unsafe Shutdowns: 0 00:15:21.685 Unrecoverable Media Errors: 0 00:15:21.685 Lifetime Error Log Entries: 0 00:15:21.685 Warning Temperature Time: 0 minutes 00:15:21.685 Critical Temperature Time: 0 minutes 00:15:21.685 00:15:21.685 Number of Queues 00:15:21.685 ================ 00:15:21.685 Number of I/O Submission Queues: 127 00:15:21.685 Number of I/O Completion Queues: 127 00:15:21.685 00:15:21.685 Active Namespaces 00:15:21.685 ================= 00:15:21.685 Namespace ID:1 00:15:21.685 Error Recovery Timeout: Unlimited 00:15:21.685 Command Set Identifier: NVM (00h) 00:15:21.685 Deallocate: Supported 00:15:21.685 Deallocated/Unwritten Error: Not Supported 00:15:21.685 Deallocated Read Value: Unknown 00:15:21.685 Deallocate in Write Zeroes: Not Supported 00:15:21.685 Deallocated Guard Field: 0xFFFF 00:15:21.685 Flush: Supported 00:15:21.685 Reservation: Supported 00:15:21.685 Namespace Sharing Capabilities: Multiple Controllers 00:15:21.685 Size (in LBAs): 131072 (0GiB) 00:15:21.685 Capacity (in LBAs): 131072 (0GiB) 00:15:21.685 Utilization (in LBAs): 131072 (0GiB) 00:15:21.685 NGUID: DA620DFE91D74930A3F4D7133A1DA4DE 00:15:21.686 UUID: da620dfe-91d7-4930-a3f4-d7133a1da4de 00:15:21.686 Thin Provisioning: Not Supported 00:15:21.686 Per-NS Atomic Units: Yes 00:15:21.686 Atomic Boundary Size (Normal): 0 00:15:21.686 Atomic Boundary Size (PFail): 0 00:15:21.686 Atomic Boundary Offset: 0 00:15:21.686 Maximum Single Source Range Length: 65535 00:15:21.686 Maximum Copy Length: 65535 00:15:21.686 Maximum Source Range Count: 1 00:15:21.686 NGUID/EUI64 Never Reused: No 00:15:21.686 Namespace Write Protected: No 00:15:21.686 Number of LBA Formats: 1 00:15:21.686 Current LBA Format: LBA Format #00 00:15:21.686 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:15:21.686 00:15:21.686 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:21.686 [2024-11-19 18:14:23.087820] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:26.980 Initializing NVMe Controllers 00:15:26.980 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:26.980 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:26.980 Initialization complete. Launching workers. 00:15:26.980 ======================================================== 00:15:26.980 Latency(us) 00:15:26.980 Device Information : IOPS MiB/s Average min max 00:15:26.980 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39989.60 156.21 3200.71 849.11 9341.19 00:15:26.980 ======================================================== 00:15:26.980 Total : 39989.60 156.21 3200.71 849.11 9341.19 00:15:26.980 00:15:26.980 [2024-11-19 18:14:28.107758] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:26.980 18:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:26.980 [2024-11-19 18:14:28.296552] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:32.292 Initializing NVMe Controllers 00:15:32.292 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:32.292 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:32.292 Initialization complete. Launching workers. 00:15:32.292 ======================================================== 00:15:32.292 Latency(us) 00:15:32.292 Device Information : IOPS MiB/s Average min max 00:15:32.292 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16001.54 62.51 8004.79 6989.39 15962.05 00:15:32.292 ======================================================== 00:15:32.292 Total : 16001.54 62.51 8004.79 6989.39 15962.05 00:15:32.292 00:15:32.292 [2024-11-19 18:14:33.341125] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:32.292 18:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:32.292 [2024-11-19 18:14:33.548000] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:37.588 [2024-11-19 18:14:38.615385] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:37.588 Initializing NVMe Controllers 00:15:37.588 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:37.588 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:37.588 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:37.588 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:37.588 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:37.588 Initialization complete. 
Launching workers. 00:15:37.588 Starting thread on core 2 00:15:37.588 Starting thread on core 3 00:15:37.588 Starting thread on core 1 00:15:37.588 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:37.588 [2024-11-19 18:14:38.873504] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:40.892 [2024-11-19 18:14:42.112121] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:40.892 Initializing NVMe Controllers 00:15:40.892 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:40.892 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:40.892 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:40.892 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:40.892 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:40.892 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:40.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:40.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:40.892 Initialization complete. Launching workers. 
00:15:40.892 Starting thread on core 1 with urgent priority queue 00:15:40.892 Starting thread on core 2 with urgent priority queue 00:15:40.892 Starting thread on core 3 with urgent priority queue 00:15:40.892 Starting thread on core 0 with urgent priority queue 00:15:40.892 SPDK bdev Controller (SPDK1 ) core 0: 8363.67 IO/s 11.96 secs/100000 ios 00:15:40.892 SPDK bdev Controller (SPDK1 ) core 1: 11157.00 IO/s 8.96 secs/100000 ios 00:15:40.892 SPDK bdev Controller (SPDK1 ) core 2: 9408.33 IO/s 10.63 secs/100000 ios 00:15:40.892 SPDK bdev Controller (SPDK1 ) core 3: 12035.33 IO/s 8.31 secs/100000 ios 00:15:40.892 ======================================================== 00:15:40.892 00:15:40.892 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:40.892 [2024-11-19 18:14:42.358573] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:41.152 Initializing NVMe Controllers 00:15:41.152 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.152 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.152 Namespace ID: 1 size: 0GB 00:15:41.152 Initialization complete. 00:15:41.152 INFO: using host memory buffer for IO 00:15:41.152 Hello world! 
00:15:41.152 [2024-11-19 18:14:42.394780] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:41.152 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:41.411 [2024-11-19 18:14:42.632561] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:42.355 Initializing NVMe Controllers 00:15:42.355 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.355 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.355 Initialization complete. Launching workers. 00:15:42.355 submit (in ns) avg, min, max = 5963.9, 2830.8, 3999395.0 00:15:42.355 complete (in ns) avg, min, max = 16650.4, 1623.3, 7987165.0 00:15:42.355 00:15:42.355 Submit histogram 00:15:42.355 ================ 00:15:42.355 Range in us Cumulative Count 00:15:42.355 2.827 - 2.840: 0.4236% ( 85) 00:15:42.355 2.840 - 2.853: 1.5201% ( 220) 00:15:42.355 2.853 - 2.867: 3.5038% ( 398) 00:15:42.355 2.867 - 2.880: 8.2037% ( 943) 00:15:42.355 2.880 - 2.893: 13.3921% ( 1041) 00:15:42.355 2.893 - 2.907: 19.3979% ( 1205) 00:15:42.355 2.907 - 2.920: 25.8971% ( 1304) 00:15:42.355 2.920 - 2.933: 31.0855% ( 1041) 00:15:42.355 2.933 - 2.947: 37.2807% ( 1243) 00:15:42.355 2.947 - 2.960: 42.7183% ( 1091) 00:15:42.355 2.960 - 2.973: 48.4201% ( 1144) 00:15:42.355 2.973 - 2.987: 54.6003% ( 1240) 00:15:42.355 2.987 - 3.000: 62.7691% ( 1639) 00:15:42.355 3.000 - 3.013: 72.0594% ( 1864) 00:15:42.355 3.013 - 3.027: 81.1653% ( 1827) 00:15:42.355 3.027 - 3.040: 88.2526% ( 1422) 00:15:42.355 3.040 - 3.053: 93.1419% ( 981) 00:15:42.355 3.053 - 3.067: 96.2669% ( 627) 00:15:42.355 3.067 - 3.080: 98.0463% ( 357) 00:15:42.355 3.080 - 3.093: 98.9234% ( 176) 00:15:42.355 3.093 - 3.107: 
99.3222% ( 80) 00:15:42.355 3.107 - 3.120: 99.4866% ( 33) 00:15:42.355 3.120 - 3.133: 99.5614% ( 15) 00:15:42.355 3.133 - 3.147: 99.5813% ( 4) 00:15:42.355 3.147 - 3.160: 99.5913% ( 2) 00:15:42.355 3.160 - 3.173: 99.6013% ( 2) 00:15:42.355 3.173 - 3.187: 99.6063% ( 1) 00:15:42.355 3.240 - 3.253: 99.6162% ( 2) 00:15:42.355 3.293 - 3.307: 99.6212% ( 1) 00:15:42.355 3.307 - 3.320: 99.6262% ( 1) 00:15:42.355 3.320 - 3.333: 99.6312% ( 1) 00:15:42.355 3.360 - 3.373: 99.6362% ( 1) 00:15:42.355 3.413 - 3.440: 99.6411% ( 1) 00:15:42.355 3.440 - 3.467: 99.6511% ( 2) 00:15:42.355 3.493 - 3.520: 99.6561% ( 1) 00:15:42.355 3.573 - 3.600: 99.6661% ( 2) 00:15:42.355 3.680 - 3.707: 99.6711% ( 1) 00:15:42.355 3.733 - 3.760: 99.6760% ( 1) 00:15:42.355 3.813 - 3.840: 99.6810% ( 1) 00:15:42.355 3.893 - 3.920: 99.6860% ( 1) 00:15:42.356 4.213 - 4.240: 99.6910% ( 1) 00:15:42.356 4.693 - 4.720: 99.7010% ( 2) 00:15:42.356 4.773 - 4.800: 99.7059% ( 1) 00:15:42.356 4.880 - 4.907: 99.7109% ( 1) 00:15:42.356 4.987 - 5.013: 99.7159% ( 1) 00:15:42.356 5.013 - 5.040: 99.7209% ( 1) 00:15:42.356 5.067 - 5.093: 99.7508% ( 6) 00:15:42.356 5.093 - 5.120: 99.7608% ( 2) 00:15:42.356 5.120 - 5.147: 99.7657% ( 1) 00:15:42.356 5.573 - 5.600: 99.7707% ( 1) 00:15:42.356 5.707 - 5.733: 99.7757% ( 1) 00:15:42.356 5.920 - 5.947: 99.7807% ( 1) 00:15:42.356 6.027 - 6.053: 99.7857% ( 1) 00:15:42.356 6.080 - 6.107: 99.7907% ( 1) 00:15:42.356 6.107 - 6.133: 99.7957% ( 1) 00:15:42.356 6.267 - 6.293: 99.8006% ( 1) 00:15:42.356 6.400 - 6.427: 99.8056% ( 1) 00:15:42.356 6.427 - 6.453: 99.8106% ( 1) 00:15:42.356 6.480 - 6.507: 99.8156% ( 1) 00:15:42.356 6.640 - 6.667: 99.8206% ( 1) 00:15:42.356 6.693 - 6.720: 99.8256% ( 1) 00:15:42.356 6.773 - 6.800: 99.8305% ( 1) 00:15:42.356 6.800 - 6.827: 99.8355% ( 1) 00:15:42.356 6.827 - 6.880: 99.8405% ( 1) 00:15:42.356 6.880 - 6.933: 99.8455% ( 1) 00:15:42.356 7.040 - 7.093: 99.8555% ( 2) 00:15:42.356 7.200 - 7.253: 99.8604% ( 1) 00:15:42.356 7.253 - 7.307: 99.8654% ( 1) 
00:15:42.356 7.307 - 7.360: 99.8704% ( 1) 00:15:42.356 7.360 - 7.413: 99.8754% ( 1) 00:15:42.356 7.413 - 7.467: 99.8854% ( 2) 00:15:42.356 7.467 - 7.520: 99.8904% ( 1) 00:15:42.356 7.520 - 7.573: 99.9053% ( 3) 00:15:42.356 7.787 - 7.840: 99.9103% ( 1) 00:15:42.356 7.840 - 7.893: 99.9153% ( 1) 00:15:42.356 8.107 - 8.160: 99.9203% ( 1) 00:15:42.356 8.267 - 8.320: 99.9252% ( 1) 00:15:42.356 3986.773 - 4014.080: 100.0000% ( 15) 00:15:42.356 00:15:42.356 Complete histogram 00:15:42.356 ================== 00:15:42.356 Ra[2024-11-19 18:14:43.651168] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:42.356 nge in us Cumulative Count 00:15:42.356 1.620 - 1.627: 0.0050% ( 1) 00:15:42.356 1.633 - 1.640: 0.2542% ( 50) 00:15:42.356 1.640 - 1.647: 0.9370% ( 137) 00:15:42.356 1.647 - 1.653: 1.0467% ( 22) 00:15:42.356 1.653 - 1.660: 1.1264% ( 16) 00:15:42.356 1.660 - 1.667: 1.2161% ( 18) 00:15:42.356 1.667 - 1.673: 1.2311% ( 3) 00:15:42.356 1.673 - 1.680: 12.0016% ( 2161) 00:15:42.356 1.680 - 1.687: 48.0263% ( 7228) 00:15:42.356 1.687 - 1.693: 54.4109% ( 1281) 00:15:42.356 1.693 - 1.700: 65.1366% ( 2152) 00:15:42.356 1.700 - 1.707: 73.7789% ( 1734) 00:15:42.356 1.707 - 1.720: 81.9428% ( 1638) 00:15:42.356 1.720 - 1.733: 83.3533% ( 283) 00:15:42.356 1.733 - 1.747: 87.1411% ( 760) 00:15:42.356 1.747 - 1.760: 92.7632% ( 1128) 00:15:42.356 1.760 - 1.773: 96.9049% ( 831) 00:15:42.356 1.773 - 1.787: 98.7291% ( 366) 00:15:42.356 1.787 - 1.800: 99.2923% ( 113) 00:15:42.356 1.800 - 1.813: 99.3720% ( 16) 00:15:42.356 1.813 - 1.827: 99.3919% ( 4) 00:15:42.356 1.853 - 1.867: 99.3969% ( 1) 00:15:42.356 1.880 - 1.893: 99.4019% ( 1) 00:15:42.356 1.893 - 1.907: 99.4069% ( 1) 00:15:42.356 2.040 - 2.053: 99.4119% ( 1) 00:15:42.356 3.240 - 3.253: 99.4169% ( 1) 00:15:42.356 4.080 - 4.107: 99.4219% ( 1) 00:15:42.356 4.133 - 4.160: 99.4268% ( 1) 00:15:42.356 4.213 - 4.240: 99.4318% ( 1) 00:15:42.356 4.293 - 4.320: 99.4368% ( 1) 00:15:42.356 
4.320 - 4.347: 99.4418% ( 1) 00:15:42.356 4.400 - 4.427: 99.4468% ( 1) 00:15:42.356 4.587 - 4.613: 99.4518% ( 1) 00:15:42.356 4.827 - 4.853: 99.4567% ( 1) 00:15:42.356 4.880 - 4.907: 99.4617% ( 1) 00:15:42.356 5.040 - 5.067: 99.4667% ( 1) 00:15:42.356 5.067 - 5.093: 99.4717% ( 1) 00:15:42.356 5.173 - 5.200: 99.4767% ( 1) 00:15:42.356 5.200 - 5.227: 99.4866% ( 2) 00:15:42.356 5.227 - 5.253: 99.4966% ( 2) 00:15:42.356 5.253 - 5.280: 99.5016% ( 1) 00:15:42.356 5.307 - 5.333: 99.5066% ( 1) 00:15:42.356 5.333 - 5.360: 99.5116% ( 1) 00:15:42.356 5.387 - 5.413: 99.5165% ( 1) 00:15:42.356 5.467 - 5.493: 99.5215% ( 1) 00:15:42.356 5.547 - 5.573: 99.5265% ( 1) 00:15:42.356 5.653 - 5.680: 99.5315% ( 1) 00:15:42.356 5.813 - 5.840: 99.5365% ( 1) 00:15:42.356 5.867 - 5.893: 99.5415% ( 1) 00:15:42.356 5.893 - 5.920: 99.5465% ( 1) 00:15:42.356 5.947 - 5.973: 99.5514% ( 1) 00:15:42.356 6.107 - 6.133: 99.5564% ( 1) 00:15:42.356 6.240 - 6.267: 99.5614% ( 1) 00:15:42.356 6.400 - 6.427: 99.5664% ( 1) 00:15:42.356 6.453 - 6.480: 99.5714% ( 1) 00:15:42.356 6.480 - 6.507: 99.5764% ( 1) 00:15:42.356 6.560 - 6.587: 99.5813% ( 1) 00:15:42.356 6.667 - 6.693: 99.5863% ( 1) 00:15:42.356 6.693 - 6.720: 99.5913% ( 1) 00:15:42.356 6.880 - 6.933: 99.5963% ( 1) 00:15:42.356 6.987 - 7.040: 99.6013% ( 1) 00:15:42.356 7.093 - 7.147: 99.6063% ( 1) 00:15:42.356 7.200 - 7.253: 99.6162% ( 2) 00:15:42.356 8.213 - 8.267: 99.6212% ( 1) 00:15:42.356 11.733 - 11.787: 99.6262% ( 1) 00:15:42.356 126.293 - 127.147: 99.6312% ( 1) 00:15:42.356 3986.773 - 4014.080: 99.9900% ( 72) 00:15:42.356 4014.080 - 4041.387: 99.9950% ( 1) 00:15:42.356 7973.547 - 8028.160: 100.0000% ( 1) 00:15:42.356 00:15:42.356 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:42.356 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 
00:15:42.356 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:42.356 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:42.356 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:42.617 [ 00:15:42.617 { 00:15:42.617 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:42.617 "subtype": "Discovery", 00:15:42.617 "listen_addresses": [], 00:15:42.617 "allow_any_host": true, 00:15:42.617 "hosts": [] 00:15:42.617 }, 00:15:42.617 { 00:15:42.617 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:42.617 "subtype": "NVMe", 00:15:42.617 "listen_addresses": [ 00:15:42.617 { 00:15:42.617 "trtype": "VFIOUSER", 00:15:42.617 "adrfam": "IPv4", 00:15:42.617 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:42.617 "trsvcid": "0" 00:15:42.617 } 00:15:42.617 ], 00:15:42.617 "allow_any_host": true, 00:15:42.617 "hosts": [], 00:15:42.617 "serial_number": "SPDK1", 00:15:42.617 "model_number": "SPDK bdev Controller", 00:15:42.617 "max_namespaces": 32, 00:15:42.617 "min_cntlid": 1, 00:15:42.617 "max_cntlid": 65519, 00:15:42.617 "namespaces": [ 00:15:42.617 { 00:15:42.617 "nsid": 1, 00:15:42.617 "bdev_name": "Malloc1", 00:15:42.617 "name": "Malloc1", 00:15:42.617 "nguid": "DA620DFE91D74930A3F4D7133A1DA4DE", 00:15:42.617 "uuid": "da620dfe-91d7-4930-a3f4-d7133a1da4de" 00:15:42.617 } 00:15:42.617 ] 00:15:42.617 }, 00:15:42.617 { 00:15:42.617 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:42.617 "subtype": "NVMe", 00:15:42.617 "listen_addresses": [ 00:15:42.617 { 00:15:42.617 "trtype": "VFIOUSER", 00:15:42.617 "adrfam": "IPv4", 00:15:42.617 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:42.617 "trsvcid": "0" 00:15:42.617 } 00:15:42.617 ], 00:15:42.617 "allow_any_host": true, 00:15:42.617 "hosts": [], 00:15:42.617 
"serial_number": "SPDK2", 00:15:42.617 "model_number": "SPDK bdev Controller", 00:15:42.617 "max_namespaces": 32, 00:15:42.617 "min_cntlid": 1, 00:15:42.617 "max_cntlid": 65519, 00:15:42.617 "namespaces": [ 00:15:42.617 { 00:15:42.617 "nsid": 1, 00:15:42.617 "bdev_name": "Malloc2", 00:15:42.617 "name": "Malloc2", 00:15:42.617 "nguid": "063E4BBD660541B4BF781DAB16B71CFD", 00:15:42.617 "uuid": "063e4bbd-6605-41b4-bf78-1dab16b71cfd" 00:15:42.617 } 00:15:42.617 ] 00:15:42.617 } 00:15:42.617 ] 00:15:42.617 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:42.617 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1942644 00:15:42.617 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:42.617 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:42.617 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:42.617 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:42.617 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:42.617 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:42.617 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:42.617 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:42.617 [2024-11-19 18:14:44.029590] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:42.617 Malloc3 00:15:42.617 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:42.878 [2024-11-19 18:14:44.223920] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:42.878 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:42.878 Asynchronous Event Request test 00:15:42.878 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.878 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.878 Registering asynchronous event callbacks... 00:15:42.878 Starting namespace attribute notice tests for all controllers... 00:15:42.878 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:42.878 aer_cb - Changed Namespace 00:15:42.878 Cleaning up... 
00:15:43.141 [ 00:15:43.141 { 00:15:43.141 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.141 "subtype": "Discovery", 00:15:43.141 "listen_addresses": [], 00:15:43.141 "allow_any_host": true, 00:15:43.141 "hosts": [] 00:15:43.141 }, 00:15:43.141 { 00:15:43.141 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.141 "subtype": "NVMe", 00:15:43.141 "listen_addresses": [ 00:15:43.141 { 00:15:43.141 "trtype": "VFIOUSER", 00:15:43.141 "adrfam": "IPv4", 00:15:43.141 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.141 "trsvcid": "0" 00:15:43.141 } 00:15:43.141 ], 00:15:43.141 "allow_any_host": true, 00:15:43.141 "hosts": [], 00:15:43.141 "serial_number": "SPDK1", 00:15:43.141 "model_number": "SPDK bdev Controller", 00:15:43.141 "max_namespaces": 32, 00:15:43.141 "min_cntlid": 1, 00:15:43.141 "max_cntlid": 65519, 00:15:43.141 "namespaces": [ 00:15:43.141 { 00:15:43.141 "nsid": 1, 00:15:43.141 "bdev_name": "Malloc1", 00:15:43.141 "name": "Malloc1", 00:15:43.141 "nguid": "DA620DFE91D74930A3F4D7133A1DA4DE", 00:15:43.141 "uuid": "da620dfe-91d7-4930-a3f4-d7133a1da4de" 00:15:43.141 }, 00:15:43.141 { 00:15:43.141 "nsid": 2, 00:15:43.141 "bdev_name": "Malloc3", 00:15:43.141 "name": "Malloc3", 00:15:43.141 "nguid": "0E1A718CE7B646A394FC32248FB9D2BA", 00:15:43.141 "uuid": "0e1a718c-e7b6-46a3-94fc-32248fb9d2ba" 00:15:43.141 } 00:15:43.141 ] 00:15:43.141 }, 00:15:43.141 { 00:15:43.141 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.141 "subtype": "NVMe", 00:15:43.141 "listen_addresses": [ 00:15:43.141 { 00:15:43.141 "trtype": "VFIOUSER", 00:15:43.141 "adrfam": "IPv4", 00:15:43.141 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.141 "trsvcid": "0" 00:15:43.141 } 00:15:43.141 ], 00:15:43.141 "allow_any_host": true, 00:15:43.141 "hosts": [], 00:15:43.141 "serial_number": "SPDK2", 00:15:43.141 "model_number": "SPDK bdev Controller", 00:15:43.141 "max_namespaces": 32, 00:15:43.141 "min_cntlid": 1, 00:15:43.141 "max_cntlid": 65519, 00:15:43.141 "namespaces": [ 
00:15:43.141 { 00:15:43.141 "nsid": 1, 00:15:43.141 "bdev_name": "Malloc2", 00:15:43.141 "name": "Malloc2", 00:15:43.141 "nguid": "063E4BBD660541B4BF781DAB16B71CFD", 00:15:43.141 "uuid": "063e4bbd-6605-41b4-bf78-1dab16b71cfd" 00:15:43.141 } 00:15:43.141 ] 00:15:43.141 } 00:15:43.141 ] 00:15:43.141 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1942644 00:15:43.141 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:43.141 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:43.141 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:43.141 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:43.141 [2024-11-19 18:14:44.452133] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:15:43.141 [2024-11-19 18:14:44.452186] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1942709 ] 00:15:43.141 [2024-11-19 18:14:44.490420] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:43.141 [2024-11-19 18:14:44.499383] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:43.141 [2024-11-19 18:14:44.499404] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f41c97a3000 00:15:43.141 [2024-11-19 18:14:44.500386] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.141 [2024-11-19 18:14:44.501390] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.141 [2024-11-19 18:14:44.502392] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.141 [2024-11-19 18:14:44.503396] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:43.141 [2024-11-19 18:14:44.504404] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:43.141 [2024-11-19 18:14:44.505413] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.142 [2024-11-19 18:14:44.506423] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:43.142 
[2024-11-19 18:14:44.507429] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.142 [2024-11-19 18:14:44.508439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:43.142 [2024-11-19 18:14:44.508448] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f41c9798000 00:15:43.142 [2024-11-19 18:14:44.509362] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:43.142 [2024-11-19 18:14:44.522751] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:43.142 [2024-11-19 18:14:44.522771] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:43.142 [2024-11-19 18:14:44.524816] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:43.142 [2024-11-19 18:14:44.524849] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:43.142 [2024-11-19 18:14:44.524912] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:43.142 [2024-11-19 18:14:44.524922] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:43.142 [2024-11-19 18:14:44.524926] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:43.142 [2024-11-19 18:14:44.525820] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:43.142 [2024-11-19 18:14:44.525828] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:43.142 [2024-11-19 18:14:44.525834] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:43.142 [2024-11-19 18:14:44.526823] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:43.142 [2024-11-19 18:14:44.526830] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:43.142 [2024-11-19 18:14:44.526836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:43.142 [2024-11-19 18:14:44.527829] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:43.142 [2024-11-19 18:14:44.527836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:43.142 [2024-11-19 18:14:44.528837] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:43.142 [2024-11-19 18:14:44.528844] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:43.142 [2024-11-19 18:14:44.528848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:43.142 [2024-11-19 18:14:44.528853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:43.142 [2024-11-19 18:14:44.528959] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:43.142 [2024-11-19 18:14:44.528962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:43.142 [2024-11-19 18:14:44.528968] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:43.142 [2024-11-19 18:14:44.529842] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:43.142 [2024-11-19 18:14:44.530852] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:43.142 [2024-11-19 18:14:44.531856] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:43.142 [2024-11-19 18:14:44.532859] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.142 [2024-11-19 18:14:44.532891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:43.142 [2024-11-19 18:14:44.533874] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:43.142 [2024-11-19 18:14:44.533881] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:43.142 [2024-11-19 18:14:44.533885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:43.142 [2024-11-19 18:14:44.533900] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:43.142 [2024-11-19 18:14:44.533905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:43.142 [2024-11-19 18:14:44.533914] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.142 [2024-11-19 18:14:44.533918] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.142 [2024-11-19 18:14:44.533921] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:43.142 [2024-11-19 18:14:44.533931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.142 [2024-11-19 18:14:44.540166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:43.142 [2024-11-19 18:14:44.540176] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:43.142 [2024-11-19 18:14:44.540180] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:43.142 [2024-11-19 18:14:44.540183] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:43.142 [2024-11-19 18:14:44.540187] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:43.142 [2024-11-19 18:14:44.540192] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:43.142 [2024-11-19 18:14:44.540196] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:43.142 [2024-11-19 18:14:44.540199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:43.142 [2024-11-19 18:14:44.540206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:43.142 [2024-11-19 18:14:44.540214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:43.142 [2024-11-19 18:14:44.548166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:43.142 [2024-11-19 18:14:44.548178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.142 [2024-11-19 18:14:44.548185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.142 [2024-11-19 18:14:44.548191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.142 [2024-11-19 18:14:44.548197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.142 [2024-11-19 18:14:44.548200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:43.142 [2024-11-19 18:14:44.548205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:43.142 [2024-11-19 18:14:44.548212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:43.142 [2024-11-19 18:14:44.556164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:43.142 [2024-11-19 18:14:44.556172] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:43.142 [2024-11-19 18:14:44.556177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:43.142 [2024-11-19 18:14:44.556182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:43.142 [2024-11-19 18:14:44.556186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:43.142 [2024-11-19 18:14:44.556193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:43.142 [2024-11-19 18:14:44.564163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:43.142 [2024-11-19 18:14:44.564210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:43.142 [2024-11-19 18:14:44.564216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:43.142 
[2024-11-19 18:14:44.564221] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:43.142 [2024-11-19 18:14:44.564225] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:43.142 [2024-11-19 18:14:44.564227] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:43.142 [2024-11-19 18:14:44.564232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:43.142 [2024-11-19 18:14:44.572165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:43.142 [2024-11-19 18:14:44.572173] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:43.142 [2024-11-19 18:14:44.572182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:43.142 [2024-11-19 18:14:44.572187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:43.142 [2024-11-19 18:14:44.572194] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.142 [2024-11-19 18:14:44.572197] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.142 [2024-11-19 18:14:44.572199] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:43.142 [2024-11-19 18:14:44.572204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.142 [2024-11-19 18:14:44.580164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:43.142 [2024-11-19 18:14:44.580177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:43.142 [2024-11-19 18:14:44.580183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:43.142 [2024-11-19 18:14:44.580188] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.142 [2024-11-19 18:14:44.580191] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.142 [2024-11-19 18:14:44.580193] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:43.142 [2024-11-19 18:14:44.580198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.142 [2024-11-19 18:14:44.588164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:43.142 [2024-11-19 18:14:44.588172] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:43.142 [2024-11-19 18:14:44.588177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:43.142 [2024-11-19 18:14:44.588183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:43.143 [2024-11-19 18:14:44.588188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:15:43.143 [2024-11-19 18:14:44.588192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:43.143 [2024-11-19 18:14:44.588196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:43.143 [2024-11-19 18:14:44.588199] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:43.143 [2024-11-19 18:14:44.588203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:43.143 [2024-11-19 18:14:44.588206] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:43.143 [2024-11-19 18:14:44.588219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:43.143 [2024-11-19 18:14:44.596164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:43.143 [2024-11-19 18:14:44.596176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:43.143 [2024-11-19 18:14:44.604166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:43.143 [2024-11-19 18:14:44.604176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:43.404 [2024-11-19 18:14:44.612165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:43.404 [2024-11-19 
18:14:44.612176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:43.404 [2024-11-19 18:14:44.620164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:43.404 [2024-11-19 18:14:44.620176] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:43.404 [2024-11-19 18:14:44.620180] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:43.404 [2024-11-19 18:14:44.620182] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:43.404 [2024-11-19 18:14:44.620185] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:43.404 [2024-11-19 18:14:44.620187] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:43.404 [2024-11-19 18:14:44.620192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:43.404 [2024-11-19 18:14:44.620198] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:43.404 [2024-11-19 18:14:44.620201] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:43.404 [2024-11-19 18:14:44.620203] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:43.404 [2024-11-19 18:14:44.620208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:43.404 [2024-11-19 18:14:44.620213] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:43.404 [2024-11-19 18:14:44.620216] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.404 [2024-11-19 18:14:44.620218] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:43.404 [2024-11-19 18:14:44.620223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.404 [2024-11-19 18:14:44.620228] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:43.404 [2024-11-19 18:14:44.620231] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:43.404 [2024-11-19 18:14:44.620234] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:43.404 [2024-11-19 18:14:44.620238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:43.404 [2024-11-19 18:14:44.628165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:43.405 [2024-11-19 18:14:44.628176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:43.405 [2024-11-19 18:14:44.628184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:43.405 [2024-11-19 18:14:44.628189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:43.405 ===================================================== 00:15:43.405 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:43.405 ===================================================== 00:15:43.405 Controller Capabilities/Features 00:15:43.405 
================================ 00:15:43.405 Vendor ID: 4e58 00:15:43.405 Subsystem Vendor ID: 4e58 00:15:43.405 Serial Number: SPDK2 00:15:43.405 Model Number: SPDK bdev Controller 00:15:43.405 Firmware Version: 25.01 00:15:43.405 Recommended Arb Burst: 6 00:15:43.405 IEEE OUI Identifier: 8d 6b 50 00:15:43.405 Multi-path I/O 00:15:43.405 May have multiple subsystem ports: Yes 00:15:43.405 May have multiple controllers: Yes 00:15:43.405 Associated with SR-IOV VF: No 00:15:43.405 Max Data Transfer Size: 131072 00:15:43.405 Max Number of Namespaces: 32 00:15:43.405 Max Number of I/O Queues: 127 00:15:43.405 NVMe Specification Version (VS): 1.3 00:15:43.405 NVMe Specification Version (Identify): 1.3 00:15:43.405 Maximum Queue Entries: 256 00:15:43.405 Contiguous Queues Required: Yes 00:15:43.405 Arbitration Mechanisms Supported 00:15:43.405 Weighted Round Robin: Not Supported 00:15:43.405 Vendor Specific: Not Supported 00:15:43.405 Reset Timeout: 15000 ms 00:15:43.405 Doorbell Stride: 4 bytes 00:15:43.405 NVM Subsystem Reset: Not Supported 00:15:43.405 Command Sets Supported 00:15:43.405 NVM Command Set: Supported 00:15:43.405 Boot Partition: Not Supported 00:15:43.405 Memory Page Size Minimum: 4096 bytes 00:15:43.405 Memory Page Size Maximum: 4096 bytes 00:15:43.405 Persistent Memory Region: Not Supported 00:15:43.405 Optional Asynchronous Events Supported 00:15:43.405 Namespace Attribute Notices: Supported 00:15:43.405 Firmware Activation Notices: Not Supported 00:15:43.405 ANA Change Notices: Not Supported 00:15:43.405 PLE Aggregate Log Change Notices: Not Supported 00:15:43.405 LBA Status Info Alert Notices: Not Supported 00:15:43.405 EGE Aggregate Log Change Notices: Not Supported 00:15:43.405 Normal NVM Subsystem Shutdown event: Not Supported 00:15:43.405 Zone Descriptor Change Notices: Not Supported 00:15:43.405 Discovery Log Change Notices: Not Supported 00:15:43.405 Controller Attributes 00:15:43.405 128-bit Host Identifier: Supported 00:15:43.405 
Non-Operational Permissive Mode: Not Supported 00:15:43.405 NVM Sets: Not Supported 00:15:43.405 Read Recovery Levels: Not Supported 00:15:43.405 Endurance Groups: Not Supported 00:15:43.405 Predictable Latency Mode: Not Supported 00:15:43.405 Traffic Based Keep ALive: Not Supported 00:15:43.405 Namespace Granularity: Not Supported 00:15:43.405 SQ Associations: Not Supported 00:15:43.405 UUID List: Not Supported 00:15:43.405 Multi-Domain Subsystem: Not Supported 00:15:43.405 Fixed Capacity Management: Not Supported 00:15:43.405 Variable Capacity Management: Not Supported 00:15:43.405 Delete Endurance Group: Not Supported 00:15:43.405 Delete NVM Set: Not Supported 00:15:43.405 Extended LBA Formats Supported: Not Supported 00:15:43.405 Flexible Data Placement Supported: Not Supported 00:15:43.405 00:15:43.405 Controller Memory Buffer Support 00:15:43.405 ================================ 00:15:43.405 Supported: No 00:15:43.405 00:15:43.405 Persistent Memory Region Support 00:15:43.405 ================================ 00:15:43.405 Supported: No 00:15:43.405 00:15:43.405 Admin Command Set Attributes 00:15:43.405 ============================ 00:15:43.405 Security Send/Receive: Not Supported 00:15:43.405 Format NVM: Not Supported 00:15:43.405 Firmware Activate/Download: Not Supported 00:15:43.405 Namespace Management: Not Supported 00:15:43.405 Device Self-Test: Not Supported 00:15:43.405 Directives: Not Supported 00:15:43.405 NVMe-MI: Not Supported 00:15:43.405 Virtualization Management: Not Supported 00:15:43.405 Doorbell Buffer Config: Not Supported 00:15:43.405 Get LBA Status Capability: Not Supported 00:15:43.405 Command & Feature Lockdown Capability: Not Supported 00:15:43.405 Abort Command Limit: 4 00:15:43.405 Async Event Request Limit: 4 00:15:43.405 Number of Firmware Slots: N/A 00:15:43.405 Firmware Slot 1 Read-Only: N/A 00:15:43.405 Firmware Activation Without Reset: N/A 00:15:43.405 Multiple Update Detection Support: N/A 00:15:43.405 Firmware Update 
Granularity: No Information Provided 00:15:43.405 Per-Namespace SMART Log: No 00:15:43.405 Asymmetric Namespace Access Log Page: Not Supported 00:15:43.405 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:43.405 Command Effects Log Page: Supported 00:15:43.405 Get Log Page Extended Data: Supported 00:15:43.405 Telemetry Log Pages: Not Supported 00:15:43.405 Persistent Event Log Pages: Not Supported 00:15:43.405 Supported Log Pages Log Page: May Support 00:15:43.405 Commands Supported & Effects Log Page: Not Supported 00:15:43.405 Feature Identifiers & Effects Log Page:May Support 00:15:43.405 NVMe-MI Commands & Effects Log Page: May Support 00:15:43.405 Data Area 4 for Telemetry Log: Not Supported 00:15:43.405 Error Log Page Entries Supported: 128 00:15:43.405 Keep Alive: Supported 00:15:43.405 Keep Alive Granularity: 10000 ms 00:15:43.405 00:15:43.405 NVM Command Set Attributes 00:15:43.405 ========================== 00:15:43.405 Submission Queue Entry Size 00:15:43.405 Max: 64 00:15:43.405 Min: 64 00:15:43.405 Completion Queue Entry Size 00:15:43.405 Max: 16 00:15:43.405 Min: 16 00:15:43.405 Number of Namespaces: 32 00:15:43.405 Compare Command: Supported 00:15:43.405 Write Uncorrectable Command: Not Supported 00:15:43.405 Dataset Management Command: Supported 00:15:43.405 Write Zeroes Command: Supported 00:15:43.405 Set Features Save Field: Not Supported 00:15:43.405 Reservations: Not Supported 00:15:43.405 Timestamp: Not Supported 00:15:43.405 Copy: Supported 00:15:43.405 Volatile Write Cache: Present 00:15:43.405 Atomic Write Unit (Normal): 1 00:15:43.405 Atomic Write Unit (PFail): 1 00:15:43.405 Atomic Compare & Write Unit: 1 00:15:43.405 Fused Compare & Write: Supported 00:15:43.405 Scatter-Gather List 00:15:43.405 SGL Command Set: Supported (Dword aligned) 00:15:43.405 SGL Keyed: Not Supported 00:15:43.405 SGL Bit Bucket Descriptor: Not Supported 00:15:43.405 SGL Metadata Pointer: Not Supported 00:15:43.405 Oversized SGL: Not Supported 00:15:43.405 SGL 
Metadata Address: Not Supported 00:15:43.405 SGL Offset: Not Supported 00:15:43.405 Transport SGL Data Block: Not Supported 00:15:43.405 Replay Protected Memory Block: Not Supported 00:15:43.405 00:15:43.405 Firmware Slot Information 00:15:43.405 ========================= 00:15:43.405 Active slot: 1 00:15:43.405 Slot 1 Firmware Revision: 25.01 00:15:43.405 00:15:43.405 00:15:43.405 Commands Supported and Effects 00:15:43.405 ============================== 00:15:43.405 Admin Commands 00:15:43.405 -------------- 00:15:43.405 Get Log Page (02h): Supported 00:15:43.405 Identify (06h): Supported 00:15:43.405 Abort (08h): Supported 00:15:43.405 Set Features (09h): Supported 00:15:43.405 Get Features (0Ah): Supported 00:15:43.405 Asynchronous Event Request (0Ch): Supported 00:15:43.405 Keep Alive (18h): Supported 00:15:43.405 I/O Commands 00:15:43.405 ------------ 00:15:43.405 Flush (00h): Supported LBA-Change 00:15:43.405 Write (01h): Supported LBA-Change 00:15:43.405 Read (02h): Supported 00:15:43.405 Compare (05h): Supported 00:15:43.405 Write Zeroes (08h): Supported LBA-Change 00:15:43.405 Dataset Management (09h): Supported LBA-Change 00:15:43.405 Copy (19h): Supported LBA-Change 00:15:43.405 00:15:43.405 Error Log 00:15:43.405 ========= 00:15:43.405 00:15:43.405 Arbitration 00:15:43.405 =========== 00:15:43.405 Arbitration Burst: 1 00:15:43.405 00:15:43.405 Power Management 00:15:43.405 ================ 00:15:43.405 Number of Power States: 1 00:15:43.405 Current Power State: Power State #0 00:15:43.405 Power State #0: 00:15:43.405 Max Power: 0.00 W 00:15:43.405 Non-Operational State: Operational 00:15:43.405 Entry Latency: Not Reported 00:15:43.405 Exit Latency: Not Reported 00:15:43.405 Relative Read Throughput: 0 00:15:43.405 Relative Read Latency: 0 00:15:43.405 Relative Write Throughput: 0 00:15:43.405 Relative Write Latency: 0 00:15:43.405 Idle Power: Not Reported 00:15:43.405 Active Power: Not Reported 00:15:43.405 Non-Operational Permissive Mode: Not 
Supported 00:15:43.405 00:15:43.405 Health Information 00:15:43.406 ================== 00:15:43.406 Critical Warnings: 00:15:43.406 Available Spare Space: OK 00:15:43.406 Temperature: OK 00:15:43.406 Device Reliability: OK 00:15:43.406 Read Only: No 00:15:43.406 Volatile Memory Backup: OK 00:15:43.406 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:43.406 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:43.406 Available Spare: 0% 00:15:43.406 Available Sp[2024-11-19 18:14:44.628261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:43.406 [2024-11-19 18:14:44.636165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:43.406 [2024-11-19 18:14:44.636189] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:43.406 [2024-11-19 18:14:44.636196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.406 [2024-11-19 18:14:44.636202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.406 [2024-11-19 18:14:44.636206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.406 [2024-11-19 18:14:44.636210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.406 [2024-11-19 18:14:44.636249] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:43.406 [2024-11-19 18:14:44.636257] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:43.406 
[2024-11-19 18:14:44.637254] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.406 [2024-11-19 18:14:44.637292] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:43.406 [2024-11-19 18:14:44.637298] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:43.406 [2024-11-19 18:14:44.638253] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:43.406 [2024-11-19 18:14:44.638261] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:43.406 [2024-11-19 18:14:44.638303] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:43.406 [2024-11-19 18:14:44.641167] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:43.406 are Threshold: 0% 00:15:43.406 Life Percentage Used: 0% 00:15:43.406 Data Units Read: 0 00:15:43.406 Data Units Written: 0 00:15:43.406 Host Read Commands: 0 00:15:43.406 Host Write Commands: 0 00:15:43.406 Controller Busy Time: 0 minutes 00:15:43.406 Power Cycles: 0 00:15:43.406 Power On Hours: 0 hours 00:15:43.406 Unsafe Shutdowns: 0 00:15:43.406 Unrecoverable Media Errors: 0 00:15:43.406 Lifetime Error Log Entries: 0 00:15:43.406 Warning Temperature Time: 0 minutes 00:15:43.406 Critical Temperature Time: 0 minutes 00:15:43.406 00:15:43.406 Number of Queues 00:15:43.406 ================ 00:15:43.406 Number of I/O Submission Queues: 127 00:15:43.406 Number of I/O Completion Queues: 127 00:15:43.406 00:15:43.406 Active Namespaces 00:15:43.406 ================= 00:15:43.406 Namespace ID:1 00:15:43.406 Error Recovery Timeout: Unlimited 
00:15:43.406 Command Set Identifier: NVM (00h) 00:15:43.406 Deallocate: Supported 00:15:43.406 Deallocated/Unwritten Error: Not Supported 00:15:43.406 Deallocated Read Value: Unknown 00:15:43.406 Deallocate in Write Zeroes: Not Supported 00:15:43.406 Deallocated Guard Field: 0xFFFF 00:15:43.406 Flush: Supported 00:15:43.406 Reservation: Supported 00:15:43.406 Namespace Sharing Capabilities: Multiple Controllers 00:15:43.406 Size (in LBAs): 131072 (0GiB) 00:15:43.406 Capacity (in LBAs): 131072 (0GiB) 00:15:43.406 Utilization (in LBAs): 131072 (0GiB) 00:15:43.406 NGUID: 063E4BBD660541B4BF781DAB16B71CFD 00:15:43.406 UUID: 063e4bbd-6605-41b4-bf78-1dab16b71cfd 00:15:43.406 Thin Provisioning: Not Supported 00:15:43.406 Per-NS Atomic Units: Yes 00:15:43.406 Atomic Boundary Size (Normal): 0 00:15:43.406 Atomic Boundary Size (PFail): 0 00:15:43.406 Atomic Boundary Offset: 0 00:15:43.406 Maximum Single Source Range Length: 65535 00:15:43.406 Maximum Copy Length: 65535 00:15:43.406 Maximum Source Range Count: 1 00:15:43.406 NGUID/EUI64 Never Reused: No 00:15:43.406 Namespace Write Protected: No 00:15:43.406 Number of LBA Formats: 1 00:15:43.406 Current LBA Format: LBA Format #00 00:15:43.406 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:43.406 00:15:43.406 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:43.406 [2024-11-19 18:14:44.831226] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:48.694 Initializing NVMe Controllers 00:15:48.694 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:48.694 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:15:48.694 Initialization complete. Launching workers. 00:15:48.694 ======================================================== 00:15:48.694 Latency(us) 00:15:48.694 Device Information : IOPS MiB/s Average min max 00:15:48.694 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40001.14 156.25 3199.58 843.73 10779.96 00:15:48.694 ======================================================== 00:15:48.694 Total : 40001.14 156.25 3199.58 843.73 10779.96 00:15:48.694 00:15:48.694 [2024-11-19 18:14:49.940361] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:48.694 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:48.694 [2024-11-19 18:14:50.130959] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:53.987 Initializing NVMe Controllers 00:15:53.987 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:53.987 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:53.987 Initialization complete. Launching workers. 
00:15:53.987 ======================================================== 00:15:53.987 Latency(us) 00:15:53.987 Device Information : IOPS MiB/s Average min max 00:15:53.987 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39956.83 156.08 3203.33 848.26 8774.83 00:15:53.987 ======================================================== 00:15:53.987 Total : 39956.83 156.08 3203.33 848.26 8774.83 00:15:53.987 00:15:53.987 [2024-11-19 18:14:55.150720] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:53.987 18:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:53.987 [2024-11-19 18:14:55.351874] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:59.279 [2024-11-19 18:15:00.497251] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:59.279 Initializing NVMe Controllers 00:15:59.279 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:59.279 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:59.279 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:59.279 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:59.279 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:59.280 Initialization complete. Launching workers. 
00:15:59.280 Starting thread on core 2 00:15:59.280 Starting thread on core 3 00:15:59.280 Starting thread on core 1 00:15:59.280 18:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:59.540 [2024-11-19 18:15:00.758555] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:02.840 [2024-11-19 18:15:03.811311] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:02.840 Initializing NVMe Controllers 00:16:02.840 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:02.840 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:02.840 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:02.840 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:02.840 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:02.840 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:02.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:02.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:02.840 Initialization complete. Launching workers. 
00:16:02.840 Starting thread on core 1 with urgent priority queue 00:16:02.840 Starting thread on core 2 with urgent priority queue 00:16:02.840 Starting thread on core 3 with urgent priority queue 00:16:02.840 Starting thread on core 0 with urgent priority queue 00:16:02.840 SPDK bdev Controller (SPDK2 ) core 0: 16165.67 IO/s 6.19 secs/100000 ios 00:16:02.840 SPDK bdev Controller (SPDK2 ) core 1: 9274.67 IO/s 10.78 secs/100000 ios 00:16:02.840 SPDK bdev Controller (SPDK2 ) core 2: 7869.00 IO/s 12.71 secs/100000 ios 00:16:02.840 SPDK bdev Controller (SPDK2 ) core 3: 13168.00 IO/s 7.59 secs/100000 ios 00:16:02.841 ======================================================== 00:16:02.841 00:16:02.841 18:15:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:02.841 [2024-11-19 18:15:04.047533] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:02.841 Initializing NVMe Controllers 00:16:02.841 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:02.841 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:02.841 Namespace ID: 1 size: 0GB 00:16:02.841 Initialization complete. 00:16:02.841 INFO: using host memory buffer for IO 00:16:02.841 Hello world! 
00:16:02.841 [2024-11-19 18:15:04.057602] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:02.841 18:15:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:02.841 [2024-11-19 18:15:04.295813] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:04.226 Initializing NVMe Controllers 00:16:04.226 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.226 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.226 Initialization complete. Launching workers. 00:16:04.226 submit (in ns) avg, min, max = 6728.1, 2815.8, 3999480.8 00:16:04.226 complete (in ns) avg, min, max = 14983.6, 1626.7, 3998292.5 00:16:04.226 00:16:04.226 Submit histogram 00:16:04.226 ================ 00:16:04.226 Range in us Cumulative Count 00:16:04.226 2.813 - 2.827: 0.3369% ( 68) 00:16:04.226 2.827 - 2.840: 1.6499% ( 265) 00:16:04.226 2.840 - 2.853: 4.0480% ( 484) 00:16:04.226 2.853 - 2.867: 9.5476% ( 1110) 00:16:04.226 2.867 - 2.880: 14.6955% ( 1039) 00:16:04.226 2.880 - 2.893: 19.9128% ( 1053) 00:16:04.226 2.893 - 2.907: 25.3035% ( 1088) 00:16:04.226 2.907 - 2.920: 30.8180% ( 1113) 00:16:04.226 2.920 - 2.933: 36.1393% ( 1074) 00:16:04.226 2.933 - 2.947: 40.9652% ( 974) 00:16:04.226 2.947 - 2.960: 46.5094% ( 1119) 00:16:04.226 2.960 - 2.973: 52.4451% ( 1198) 00:16:04.226 2.973 - 2.987: 60.7442% ( 1675) 00:16:04.226 2.987 - 3.000: 69.5338% ( 1774) 00:16:04.226 3.000 - 3.013: 78.3531% ( 1780) 00:16:04.226 3.013 - 3.027: 84.9725% ( 1336) 00:16:04.226 3.027 - 3.040: 91.0866% ( 1234) 00:16:04.226 3.040 - 3.053: 95.1345% ( 817) 00:16:04.226 3.053 - 3.067: 97.5128% ( 480) 00:16:04.226 3.067 - 3.080: 98.6375% ( 227) 00:16:04.226 3.080 - 3.093: 
99.1379% ( 101) 00:16:04.226 3.093 - 3.107: 99.3262% ( 38) 00:16:04.226 3.107 - 3.120: 99.4302% ( 21) 00:16:04.226 3.120 - 3.133: 99.4897% ( 12) 00:16:04.226 3.133 - 3.147: 99.5244% ( 7) 00:16:04.226 3.147 - 3.160: 99.5343% ( 2) 00:16:04.226 3.160 - 3.173: 99.5442% ( 2) 00:16:04.226 3.173 - 3.187: 99.5491% ( 1) 00:16:04.226 3.187 - 3.200: 99.5541% ( 1) 00:16:04.226 3.373 - 3.387: 99.5590% ( 1) 00:16:04.226 3.440 - 3.467: 99.5640% ( 1) 00:16:04.226 3.653 - 3.680: 99.5689% ( 1) 00:16:04.226 3.893 - 3.920: 99.5739% ( 1) 00:16:04.226 3.920 - 3.947: 99.5789% ( 1) 00:16:04.226 4.000 - 4.027: 99.5838% ( 1) 00:16:04.226 4.347 - 4.373: 99.5888% ( 1) 00:16:04.226 4.373 - 4.400: 99.5937% ( 1) 00:16:04.226 4.640 - 4.667: 99.5987% ( 1) 00:16:04.226 4.693 - 4.720: 99.6036% ( 1) 00:16:04.226 4.747 - 4.773: 99.6086% ( 1) 00:16:04.226 4.800 - 4.827: 99.6185% ( 2) 00:16:04.226 4.880 - 4.907: 99.6234% ( 1) 00:16:04.227 4.960 - 4.987: 99.6284% ( 1) 00:16:04.227 4.987 - 5.013: 99.6334% ( 1) 00:16:04.227 5.013 - 5.040: 99.6433% ( 2) 00:16:04.227 5.040 - 5.067: 99.6482% ( 1) 00:16:04.227 5.067 - 5.093: 99.6581% ( 2) 00:16:04.227 5.093 - 5.120: 99.6680% ( 2) 00:16:04.227 5.147 - 5.173: 99.6730% ( 1) 00:16:04.227 5.200 - 5.227: 99.6779% ( 1) 00:16:04.227 5.253 - 5.280: 99.6829% ( 1) 00:16:04.227 5.280 - 5.307: 99.6879% ( 1) 00:16:04.227 5.307 - 5.333: 99.6928% ( 1) 00:16:04.227 5.333 - 5.360: 99.7126% ( 4) 00:16:04.227 5.413 - 5.440: 99.7176% ( 1) 00:16:04.227 5.440 - 5.467: 99.7225% ( 1) 00:16:04.227 5.467 - 5.493: 99.7473% ( 5) 00:16:04.227 5.573 - 5.600: 99.7622% ( 3) 00:16:04.227 5.600 - 5.627: 99.7671% ( 1) 00:16:04.227 5.653 - 5.680: 99.7721% ( 1) 00:16:04.227 5.680 - 5.707: 99.7770% ( 1) 00:16:04.227 5.760 - 5.787: 99.7820% ( 1) 00:16:04.227 5.787 - 5.813: 99.7869% ( 1) 00:16:04.227 5.840 - 5.867: 99.7969% ( 2) 00:16:04.227 5.867 - 5.893: 99.8018% ( 1) 00:16:04.227 5.893 - 5.920: 99.8068% ( 1) 00:16:04.227 5.920 - 5.947: 99.8117% ( 1) 00:16:04.227 5.947 - 5.973: 99.8167% ( 1) 
00:16:04.227 6.027 - 6.053: 99.8216% ( 1) 00:16:04.227 6.080 - 6.107: 99.8266% ( 1) 00:16:04.227 6.320 - 6.347: 99.8365% ( 2) 00:16:04.227 6.347 - 6.373: 99.8415% ( 1) 00:16:04.227 6.453 - 6.480: 99.8464% ( 1) 00:16:04.227 6.533 - 6.560: 99.8514% ( 1) 00:16:04.227 6.560 - 6.587: 99.8563% ( 1) 00:16:04.227 6.587 - 6.613: 99.8613% ( 1) 00:16:04.227 6.720 - 6.747: 99.8662% ( 1) 00:16:04.227 6.747 - 6.773: 99.8712% ( 1) 00:16:04.227 6.773 - 6.800: 99.8761% ( 1) 00:16:04.227 [2024-11-19 18:15:05.390689] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:04.227 6.880 - 6.933: 99.8811% ( 1) 00:16:04.227 7.093 - 7.147: 99.8910% ( 2) 00:16:04.227 7.413 - 7.467: 99.8960% ( 1) 00:16:04.227 7.733 - 7.787: 99.9009% ( 1) 00:16:04.227 11.147 - 11.200: 99.9059% ( 1) 00:16:04.227 3986.773 - 4014.080: 100.0000% ( 19) 00:16:04.227 00:16:04.227 Complete histogram 00:16:04.227 ================== 00:16:04.227 Range in us Cumulative Count 00:16:04.227 1.627 - 1.633: 0.0050% ( 1) 00:16:04.227 1.633 - 1.640: 0.0297% ( 5) 00:16:04.227 1.640 - 1.647: 0.8175% ( 159) 00:16:04.227 1.647 - 1.653: 0.9166% ( 20) 00:16:04.227 1.653 - 1.660: 1.0157% ( 20) 00:16:04.227 1.660 - 1.667: 1.1445% ( 26) 00:16:04.227 1.667 - 1.673: 1.2188% ( 15) 00:16:04.227 1.673 - 1.680: 1.2436% ( 5) 00:16:04.227 1.680 - 1.687: 1.2585% ( 3) 00:16:04.227 1.687 - 1.693: 4.2115% ( 596) 00:16:04.227 1.693 - 1.700: 44.0668% ( 8044) 00:16:04.227 1.700 - 1.707: 51.8852% ( 1578) 00:16:04.227 1.707 - 1.720: 71.7138% ( 4002) 00:16:04.227 1.720 - 1.733: 81.7322% ( 2022) 00:16:04.227 1.733 - 1.747: 83.9122% ( 440) 00:16:04.227 1.747 - 1.760: 86.8206% ( 587) 00:16:04.227 1.760 - 1.773: 91.7703% ( 999) 00:16:04.227 1.773 - 1.787: 96.1800% ( 890) 00:16:04.227 1.787 - 1.800: 98.4789% ( 464) 00:16:04.227 1.800 - 1.813: 99.3311% ( 172) 00:16:04.227 1.813 - 1.827: 99.5095% ( 36) 00:16:04.227 1.827 - 1.840: 99.5194% ( 2) 00:16:04.227 1.840 - 1.853: 99.5244% ( 1) 00:16:04.227 1.853 - 
1.867: 99.5293% ( 1) 00:16:04.227 1.880 - 1.893: 99.5343% ( 1) 00:16:04.227 1.893 - 1.907: 99.5392% ( 1) 00:16:04.227 1.907 - 1.920: 99.5442% ( 1) 00:16:04.227 3.760 - 3.787: 99.5491% ( 1) 00:16:04.227 3.840 - 3.867: 99.5541% ( 1) 00:16:04.227 3.973 - 4.000: 99.5640% ( 2) 00:16:04.227 4.053 - 4.080: 99.5689% ( 1) 00:16:04.227 4.160 - 4.187: 99.5739% ( 1) 00:16:04.227 4.240 - 4.267: 99.5789% ( 1) 00:16:04.227 4.267 - 4.293: 99.5838% ( 1) 00:16:04.227 4.373 - 4.400: 99.5888% ( 1) 00:16:04.227 4.533 - 4.560: 99.5937% ( 1) 00:16:04.227 4.640 - 4.667: 99.5987% ( 1) 00:16:04.227 5.173 - 5.200: 99.6036% ( 1) 00:16:04.227 5.253 - 5.280: 99.6086% ( 1) 00:16:04.227 5.387 - 5.413: 99.6185% ( 2) 00:16:04.227 5.600 - 5.627: 99.6234% ( 1) 00:16:04.227 5.627 - 5.653: 99.6284% ( 1) 00:16:04.227 5.760 - 5.787: 99.6334% ( 1) 00:16:04.227 5.973 - 6.000: 99.6383% ( 1) 00:16:04.227 6.293 - 6.320: 99.6433% ( 1) 00:16:04.227 6.480 - 6.507: 99.6482% ( 1) 00:16:04.227 8.800 - 8.853: 99.6532% ( 1) 00:16:04.227 12.800 - 12.853: 99.6581% ( 1) 00:16:04.227 33.707 - 33.920: 99.6631% ( 1) 00:16:04.227 44.373 - 44.587: 99.6680% ( 1) 00:16:04.227 3986.773 - 4014.080: 100.0000% ( 67) 00:16:04.227 00:16:04.227 18:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:04.227 18:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:04.227 18:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:04.227 18:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:04.227 18:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:04.227 [ 00:16:04.227 { 
00:16:04.227 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:04.227 "subtype": "Discovery", 00:16:04.227 "listen_addresses": [], 00:16:04.227 "allow_any_host": true, 00:16:04.227 "hosts": [] 00:16:04.227 }, 00:16:04.227 { 00:16:04.227 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:04.227 "subtype": "NVMe", 00:16:04.227 "listen_addresses": [ 00:16:04.227 { 00:16:04.227 "trtype": "VFIOUSER", 00:16:04.227 "adrfam": "IPv4", 00:16:04.227 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:04.227 "trsvcid": "0" 00:16:04.227 } 00:16:04.227 ], 00:16:04.227 "allow_any_host": true, 00:16:04.227 "hosts": [], 00:16:04.227 "serial_number": "SPDK1", 00:16:04.227 "model_number": "SPDK bdev Controller", 00:16:04.227 "max_namespaces": 32, 00:16:04.227 "min_cntlid": 1, 00:16:04.227 "max_cntlid": 65519, 00:16:04.227 "namespaces": [ 00:16:04.227 { 00:16:04.227 "nsid": 1, 00:16:04.227 "bdev_name": "Malloc1", 00:16:04.227 "name": "Malloc1", 00:16:04.227 "nguid": "DA620DFE91D74930A3F4D7133A1DA4DE", 00:16:04.227 "uuid": "da620dfe-91d7-4930-a3f4-d7133a1da4de" 00:16:04.227 }, 00:16:04.227 { 00:16:04.227 "nsid": 2, 00:16:04.227 "bdev_name": "Malloc3", 00:16:04.227 "name": "Malloc3", 00:16:04.227 "nguid": "0E1A718CE7B646A394FC32248FB9D2BA", 00:16:04.227 "uuid": "0e1a718c-e7b6-46a3-94fc-32248fb9d2ba" 00:16:04.227 } 00:16:04.227 ] 00:16:04.227 }, 00:16:04.227 { 00:16:04.227 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:04.227 "subtype": "NVMe", 00:16:04.227 "listen_addresses": [ 00:16:04.227 { 00:16:04.227 "trtype": "VFIOUSER", 00:16:04.227 "adrfam": "IPv4", 00:16:04.227 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:04.227 "trsvcid": "0" 00:16:04.227 } 00:16:04.227 ], 00:16:04.227 "allow_any_host": true, 00:16:04.227 "hosts": [], 00:16:04.227 "serial_number": "SPDK2", 00:16:04.227 "model_number": "SPDK bdev Controller", 00:16:04.227 "max_namespaces": 32, 00:16:04.227 "min_cntlid": 1, 00:16:04.227 "max_cntlid": 65519, 00:16:04.227 "namespaces": [ 00:16:04.227 { 00:16:04.227 
"nsid": 1, 00:16:04.227 "bdev_name": "Malloc2", 00:16:04.227 "name": "Malloc2", 00:16:04.227 "nguid": "063E4BBD660541B4BF781DAB16B71CFD", 00:16:04.227 "uuid": "063e4bbd-6605-41b4-bf78-1dab16b71cfd" 00:16:04.227 } 00:16:04.227 ] 00:16:04.227 } 00:16:04.227 ] 00:16:04.227 18:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:04.227 18:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1947002 00:16:04.227 18:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:04.227 18:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:04.227 18:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:04.227 18:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:04.227 18:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:04.227 18:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:04.227 18:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:04.227 18:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:04.488 [2024-11-19 18:15:05.773507] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:04.488 Malloc4 00:16:04.488 18:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:04.749 [2024-11-19 18:15:05.967831] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:04.749 18:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:04.749 Asynchronous Event Request test 00:16:04.749 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.749 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.749 Registering asynchronous event callbacks... 00:16:04.749 Starting namespace attribute notice tests for all controllers... 00:16:04.749 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:04.749 aer_cb - Changed Namespace 00:16:04.749 Cleaning up... 
00:16:04.749 [ 00:16:04.749 { 00:16:04.749 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:04.749 "subtype": "Discovery", 00:16:04.749 "listen_addresses": [], 00:16:04.749 "allow_any_host": true, 00:16:04.749 "hosts": [] 00:16:04.749 }, 00:16:04.749 { 00:16:04.749 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:04.749 "subtype": "NVMe", 00:16:04.749 "listen_addresses": [ 00:16:04.749 { 00:16:04.749 "trtype": "VFIOUSER", 00:16:04.749 "adrfam": "IPv4", 00:16:04.749 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:04.749 "trsvcid": "0" 00:16:04.749 } 00:16:04.749 ], 00:16:04.749 "allow_any_host": true, 00:16:04.749 "hosts": [], 00:16:04.749 "serial_number": "SPDK1", 00:16:04.749 "model_number": "SPDK bdev Controller", 00:16:04.749 "max_namespaces": 32, 00:16:04.749 "min_cntlid": 1, 00:16:04.749 "max_cntlid": 65519, 00:16:04.749 "namespaces": [ 00:16:04.749 { 00:16:04.749 "nsid": 1, 00:16:04.749 "bdev_name": "Malloc1", 00:16:04.749 "name": "Malloc1", 00:16:04.749 "nguid": "DA620DFE91D74930A3F4D7133A1DA4DE", 00:16:04.749 "uuid": "da620dfe-91d7-4930-a3f4-d7133a1da4de" 00:16:04.749 }, 00:16:04.749 { 00:16:04.749 "nsid": 2, 00:16:04.749 "bdev_name": "Malloc3", 00:16:04.749 "name": "Malloc3", 00:16:04.749 "nguid": "0E1A718CE7B646A394FC32248FB9D2BA", 00:16:04.749 "uuid": "0e1a718c-e7b6-46a3-94fc-32248fb9d2ba" 00:16:04.749 } 00:16:04.749 ] 00:16:04.749 }, 00:16:04.749 { 00:16:04.749 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:04.749 "subtype": "NVMe", 00:16:04.749 "listen_addresses": [ 00:16:04.749 { 00:16:04.749 "trtype": "VFIOUSER", 00:16:04.749 "adrfam": "IPv4", 00:16:04.749 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:04.749 "trsvcid": "0" 00:16:04.749 } 00:16:04.749 ], 00:16:04.749 "allow_any_host": true, 00:16:04.749 "hosts": [], 00:16:04.749 "serial_number": "SPDK2", 00:16:04.749 "model_number": "SPDK bdev Controller", 00:16:04.749 "max_namespaces": 32, 00:16:04.749 "min_cntlid": 1, 00:16:04.749 "max_cntlid": 65519, 00:16:04.749 "namespaces": [ 
00:16:04.749 { 00:16:04.749 "nsid": 1, 00:16:04.749 "bdev_name": "Malloc2", 00:16:04.749 "name": "Malloc2", 00:16:04.749 "nguid": "063E4BBD660541B4BF781DAB16B71CFD", 00:16:04.749 "uuid": "063e4bbd-6605-41b4-bf78-1dab16b71cfd" 00:16:04.749 }, 00:16:04.749 { 00:16:04.749 "nsid": 2, 00:16:04.749 "bdev_name": "Malloc4", 00:16:04.749 "name": "Malloc4", 00:16:04.749 "nguid": "67CBAECD0FA942BF80CFCB6701E97C77", 00:16:04.749 "uuid": "67cbaecd-0fa9-42bf-80cf-cb6701e97c77" 00:16:04.749 } 00:16:04.749 ] 00:16:04.749 } 00:16:04.749 ] 00:16:04.749 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1947002 00:16:04.749 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:04.749 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1937919 00:16:04.749 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1937919 ']' 00:16:04.749 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1937919 00:16:04.749 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:04.749 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:04.749 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1937919 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1937919' 00:16:05.011 killing process with pid 1937919 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 1937919 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1937919 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1947119 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1947119' 00:16:05.011 Process pid: 1947119 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1947119 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1947119 ']' 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.011 
18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.011 18:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:05.011 [2024-11-19 18:15:06.442410] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:05.011 [2024-11-19 18:15:06.443353] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:16:05.011 [2024-11-19 18:15:06.443396] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.272 [2024-11-19 18:15:06.528996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:05.272 [2024-11-19 18:15:06.558763] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.272 [2024-11-19 18:15:06.558794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.272 [2024-11-19 18:15:06.558801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.272 [2024-11-19 18:15:06.558806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.272 [2024-11-19 18:15:06.558811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:05.272 [2024-11-19 18:15:06.559994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.272 [2024-11-19 18:15:06.560143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.272 [2024-11-19 18:15:06.560302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.272 [2024-11-19 18:15:06.560304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.272 [2024-11-19 18:15:06.610697] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:05.272 [2024-11-19 18:15:06.611652] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:05.272 [2024-11-19 18:15:06.612639] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:05.272 [2024-11-19 18:15:06.613204] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:05.272 [2024-11-19 18:15:06.613205] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:16:05.843 18:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.843 18:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:05.843 18:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:07.228 18:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:07.228 18:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:07.228 18:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:07.228 18:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:07.228 18:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:07.228 18:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:07.228 Malloc1 00:16:07.228 18:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:07.489 18:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:07.752 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:16:08.014 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:08.014 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:08.014 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:08.014 Malloc2 00:16:08.014 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:08.275 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:08.536 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:08.536 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:08.536 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1947119 00:16:08.536 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1947119 ']' 00:16:08.536 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1947119 00:16:08.536 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:08.536 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.536 18:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1947119 00:16:08.796 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:08.796 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:08.796 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1947119' 00:16:08.796 killing process with pid 1947119 00:16:08.796 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1947119 00:16:08.796 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1947119 00:16:08.796 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:08.796 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:08.796 00:16:08.796 real 0m51.204s 00:16:08.796 user 3m16.230s 00:16:08.796 sys 0m2.692s 00:16:08.796 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.796 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:08.796 ************************************ 00:16:08.796 END TEST nvmf_vfio_user 00:16:08.796 ************************************ 00:16:08.796 18:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:08.796 18:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:08.796 18:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.796 18:15:10 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:09.058 ************************************ 00:16:09.058 START TEST nvmf_vfio_user_nvme_compliance 00:16:09.058 ************************************ 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:09.058 * Looking for test storage... 00:16:09.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:09.058 18:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:09.058 18:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:09.058 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:09.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.059 --rc genhtml_branch_coverage=1 00:16:09.059 --rc genhtml_function_coverage=1 00:16:09.059 --rc genhtml_legend=1 00:16:09.059 --rc geninfo_all_blocks=1 00:16:09.059 --rc geninfo_unexecuted_blocks=1 00:16:09.059 00:16:09.059 ' 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:09.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.059 --rc genhtml_branch_coverage=1 00:16:09.059 --rc genhtml_function_coverage=1 00:16:09.059 --rc genhtml_legend=1 00:16:09.059 --rc geninfo_all_blocks=1 00:16:09.059 --rc geninfo_unexecuted_blocks=1 00:16:09.059 00:16:09.059 ' 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:09.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.059 --rc genhtml_branch_coverage=1 00:16:09.059 --rc genhtml_function_coverage=1 00:16:09.059 --rc 
genhtml_legend=1 00:16:09.059 --rc geninfo_all_blocks=1 00:16:09.059 --rc geninfo_unexecuted_blocks=1 00:16:09.059 00:16:09.059 ' 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:09.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.059 --rc genhtml_branch_coverage=1 00:16:09.059 --rc genhtml_function_coverage=1 00:16:09.059 --rc genhtml_legend=1 00:16:09.059 --rc geninfo_all_blocks=1 00:16:09.059 --rc geninfo_unexecuted_blocks=1 00:16:09.059 00:16:09.059 ' 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.059 18:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:09.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:09.059 18:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1948272 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1948272' 00:16:09.059 Process pid: 1948272 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1948272 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1948272 ']' 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.059 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:09.320 [2024-11-19 18:15:10.570187] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:16:09.320 [2024-11-19 18:15:10.570268] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.320 [2024-11-19 18:15:10.655494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:09.320 [2024-11-19 18:15:10.690500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.320 [2024-11-19 18:15:10.690533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.320 [2024-11-19 18:15:10.690539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.320 [2024-11-19 18:15:10.690544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.320 [2024-11-19 18:15:10.690548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:09.320 [2024-11-19 18:15:10.691764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.320 [2024-11-19 18:15:10.691885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.320 [2024-11-19 18:15:10.691888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.262 18:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.262 18:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:10.262 18:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.203 18:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:11.203 malloc0 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:11.203 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:11.203 00:16:11.203 00:16:11.203 CUnit - A unit testing framework for C - Version 2.1-3 00:16:11.203 http://cunit.sourceforge.net/ 00:16:11.203 00:16:11.203 00:16:11.203 Suite: nvme_compliance 00:16:11.203 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-19 18:15:12.624568] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:11.203 [2024-11-19 18:15:12.625877] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:11.203 [2024-11-19 18:15:12.625889] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:11.203 [2024-11-19 18:15:12.625894] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:11.203 [2024-11-19 18:15:12.627589] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:11.203 passed 00:16:11.465 Test: admin_identify_ctrlr_verify_fused ...[2024-11-19 18:15:12.702055] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:11.465 [2024-11-19 18:15:12.705084] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:11.465 passed 00:16:11.465 Test: admin_identify_ns ...[2024-11-19 18:15:12.781513] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:11.465 [2024-11-19 18:15:12.845165] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:11.465 [2024-11-19 18:15:12.853167] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:11.465 [2024-11-19 18:15:12.877247] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:16:11.465 passed 00:16:11.725 Test: admin_get_features_mandatory_features ...[2024-11-19 18:15:12.950436] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:11.725 [2024-11-19 18:15:12.953456] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:11.725 passed 00:16:11.725 Test: admin_get_features_optional_features ...[2024-11-19 18:15:13.028901] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:11.725 [2024-11-19 18:15:13.032921] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:11.725 passed 00:16:11.725 Test: admin_set_features_number_of_queues ...[2024-11-19 18:15:13.107626] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:11.986 [2024-11-19 18:15:13.212248] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:11.986 passed 00:16:11.986 Test: admin_get_log_page_mandatory_logs ...[2024-11-19 18:15:13.287274] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:11.986 [2024-11-19 18:15:13.290299] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:11.986 passed 00:16:11.986 Test: admin_get_log_page_with_lpo ...[2024-11-19 18:15:13.368523] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:11.986 [2024-11-19 18:15:13.437171] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:11.986 [2024-11-19 18:15:13.450216] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.246 passed 00:16:12.246 Test: fabric_property_get ...[2024-11-19 18:15:13.523409] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.246 [2024-11-19 18:15:13.524616] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:12.246 [2024-11-19 18:15:13.526429] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.246 passed 00:16:12.246 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-19 18:15:13.604878] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.246 [2024-11-19 18:15:13.606087] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:12.246 [2024-11-19 18:15:13.607899] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.246 passed 00:16:12.246 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-19 18:15:13.681625] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.507 [2024-11-19 18:15:13.766168] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:12.507 [2024-11-19 18:15:13.782162] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:12.507 [2024-11-19 18:15:13.787239] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.507 passed 00:16:12.507 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-19 18:15:13.859446] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.507 [2024-11-19 18:15:13.860642] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:12.507 [2024-11-19 18:15:13.862461] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.507 passed 00:16:12.507 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-19 18:15:13.939201] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.768 [2024-11-19 18:15:14.016167] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:12.768 [2024-11-19 
18:15:14.040166] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:12.768 [2024-11-19 18:15:14.045229] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.768 passed 00:16:12.768 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-19 18:15:14.118405] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:12.768 [2024-11-19 18:15:14.119612] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:12.768 [2024-11-19 18:15:14.119629] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:12.768 [2024-11-19 18:15:14.121428] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:12.768 passed 00:16:12.768 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-19 18:15:14.198135] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.029 [2024-11-19 18:15:14.291163] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:13.029 [2024-11-19 18:15:14.299161] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:13.029 [2024-11-19 18:15:14.307166] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:13.029 [2024-11-19 18:15:14.315171] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:13.029 [2024-11-19 18:15:14.344227] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.029 passed 00:16:13.029 Test: admin_create_io_sq_verify_pc ...[2024-11-19 18:15:14.416388] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:13.029 [2024-11-19 18:15:14.435171] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:13.029 [2024-11-19 18:15:14.452559] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:13.029 passed 00:16:13.291 Test: admin_create_io_qp_max_qps ...[2024-11-19 18:15:14.528011] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.233 [2024-11-19 18:15:15.640167] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:14.805 [2024-11-19 18:15:16.024269] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:14.805 passed 00:16:14.805 Test: admin_create_io_sq_shared_cq ...[2024-11-19 18:15:16.099740] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:14.805 [2024-11-19 18:15:16.235171] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:14.805 [2024-11-19 18:15:16.272209] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:15.066 passed 00:16:15.066 00:16:15.066 Run Summary: Type Total Ran Passed Failed Inactive 00:16:15.066 suites 1 1 n/a 0 0 00:16:15.066 tests 18 18 18 0 0 00:16:15.066 asserts 360 360 360 0 n/a 00:16:15.066 00:16:15.066 Elapsed time = 1.501 seconds 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1948272 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1948272 ']' 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1948272 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1948272 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1948272' 00:16:15.066 killing process with pid 1948272 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1948272 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1948272 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:15.066 00:16:15.066 real 0m6.224s 00:16:15.066 user 0m17.615s 00:16:15.066 sys 0m0.540s 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:15.066 ************************************ 00:16:15.066 END TEST nvmf_vfio_user_nvme_compliance 00:16:15.066 ************************************ 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.066 18:15:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:15.327 ************************************ 00:16:15.327 START TEST nvmf_vfio_user_fuzz 00:16:15.327 ************************************ 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:15.327 * Looking for test storage... 00:16:15.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.327 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.328 18:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:15.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.328 --rc genhtml_branch_coverage=1 00:16:15.328 --rc genhtml_function_coverage=1 00:16:15.328 --rc genhtml_legend=1 00:16:15.328 --rc geninfo_all_blocks=1 00:16:15.328 --rc geninfo_unexecuted_blocks=1 00:16:15.328 00:16:15.328 ' 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:15.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.328 --rc genhtml_branch_coverage=1 00:16:15.328 --rc genhtml_function_coverage=1 00:16:15.328 --rc genhtml_legend=1 00:16:15.328 --rc geninfo_all_blocks=1 00:16:15.328 --rc geninfo_unexecuted_blocks=1 00:16:15.328 00:16:15.328 ' 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:15.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.328 --rc genhtml_branch_coverage=1 00:16:15.328 --rc genhtml_function_coverage=1 00:16:15.328 --rc genhtml_legend=1 00:16:15.328 --rc geninfo_all_blocks=1 00:16:15.328 --rc geninfo_unexecuted_blocks=1 00:16:15.328 00:16:15.328 ' 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:15.328 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:15.328 --rc genhtml_branch_coverage=1 00:16:15.328 --rc genhtml_function_coverage=1 00:16:15.328 --rc genhtml_legend=1 00:16:15.328 --rc geninfo_all_blocks=1 00:16:15.328 --rc geninfo_unexecuted_blocks=1 00:16:15.328 00:16:15.328 ' 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.328 18:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:15.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:16:15.328 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:15.589 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:15.589 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:15.589 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:15.589 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:15.589 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:15.589 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1949746 00:16:15.589 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1949746' 00:16:15.589 Process pid: 1949746 00:16:15.589 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:15.589 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:15.589 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1949746 00:16:15.589 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1949746 ']' 00:16:15.589 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.589 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.589 18:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.589 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.589 18:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:16.531 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.531 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:16.531 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:17.470 malloc0 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:17.470 18:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:49.794 Fuzzing completed. Shutting down the fuzz application 00:16:49.794 00:16:49.794 Dumping successful admin opcodes: 00:16:49.794 8, 9, 10, 24, 00:16:49.794 Dumping successful io opcodes: 00:16:49.794 0, 00:16:49.794 NS: 0x20000081ef00 I/O qp, Total commands completed: 1250115, total successful commands: 4908, random_seed: 4037444864 00:16:49.794 NS: 0x20000081ef00 admin qp, Total commands completed: 266254, total successful commands: 2144, random_seed: 2809833280 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1949746 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1949746 ']' 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1949746 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1949746 00:16:49.794 18:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1949746' 00:16:49.794 killing process with pid 1949746 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1949746 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1949746 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:49.794 00:16:49.794 real 0m32.774s 00:16:49.794 user 0m35.528s 00:16:49.794 sys 0m26.078s 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:49.794 ************************************ 00:16:49.794 END TEST nvmf_vfio_user_fuzz 00:16:49.794 ************************************ 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:49.794 ************************************ 00:16:49.794 START TEST nvmf_auth_target 00:16:49.794 ************************************ 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:49.794 * Looking for test storage... 00:16:49.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:49.794 18:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:49.794 18:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:49.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.794 --rc genhtml_branch_coverage=1 00:16:49.794 --rc genhtml_function_coverage=1 00:16:49.794 --rc genhtml_legend=1 00:16:49.794 --rc geninfo_all_blocks=1 00:16:49.794 --rc geninfo_unexecuted_blocks=1 00:16:49.794 00:16:49.794 ' 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:49.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.794 --rc genhtml_branch_coverage=1 00:16:49.794 --rc genhtml_function_coverage=1 00:16:49.794 --rc genhtml_legend=1 00:16:49.794 --rc geninfo_all_blocks=1 00:16:49.794 --rc geninfo_unexecuted_blocks=1 00:16:49.794 00:16:49.794 ' 00:16:49.794 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:49.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.794 --rc genhtml_branch_coverage=1 00:16:49.794 --rc genhtml_function_coverage=1 00:16:49.795 --rc genhtml_legend=1 00:16:49.795 --rc geninfo_all_blocks=1 00:16:49.795 --rc geninfo_unexecuted_blocks=1 00:16:49.795 00:16:49.795 ' 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:49.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.795 --rc genhtml_branch_coverage=1 00:16:49.795 --rc genhtml_function_coverage=1 00:16:49.795 --rc genhtml_legend=1 00:16:49.795 
--rc geninfo_all_blocks=1 00:16:49.795 --rc geninfo_unexecuted_blocks=1 00:16:49.795 00:16:49.795 ' 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.795 
18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:49.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:49.795 18:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:49.795 18:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:49.795 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:56.383 18:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:56.383 18:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:56.383 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.383 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:56.384 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.384 
18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:56.384 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:56.384 
18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:56.384 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:56.384 18:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:56.384 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:56.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:16:56.384 00:16:56.384 --- 10.0.0.2 ping statistics --- 00:16:56.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.384 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:56.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:56.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:16:56.384 00:16:56.384 --- 10.0.0.1 ping statistics --- 00:16:56.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.384 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1959734 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1959734 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1959734 ']' 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.384 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.646 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.646 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:56.646 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:56.646 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:56.646 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.646 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.646 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1959924 00:16:56.646 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:56.646 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:56.646 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:56.646 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:56.646 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:56.646 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:56.646 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:16:56.646 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:56.646 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:56.646 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f6e433cb29fc877822a9d202cd933cb251a52af8567e25d6 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.KtE 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f6e433cb29fc877822a9d202cd933cb251a52af8567e25d6 0 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f6e433cb29fc877822a9d202cd933cb251a52af8567e25d6 0 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f6e433cb29fc877822a9d202cd933cb251a52af8567e25d6 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.KtE 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.KtE 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.KtE 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9f74d62e24a9433a821cb5f7c2964f525a31f048c9bd08e3b77d266b68aaaffc 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.nVa 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9f74d62e24a9433a821cb5f7c2964f525a31f048c9bd08e3b77d266b68aaaffc 3 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9f74d62e24a9433a821cb5f7c2964f525a31f048c9bd08e3b77d266b68aaaffc 3 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9f74d62e24a9433a821cb5f7c2964f525a31f048c9bd08e3b77d266b68aaaffc 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:16:56.646 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:56.908 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.nVa 00:16:56.908 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.nVa 00:16:56.908 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.nVa 00:16:56.908 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:56.908 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:56.908 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:56.908 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:56.908 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:56.908 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:56.908 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:56.908 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d5a15b18f8dad99ee398cbf5380180c2 00:16:56.908 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:56.908 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.1lY 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d5a15b18f8dad99ee398cbf5380180c2 1 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
d5a15b18f8dad99ee398cbf5380180c2 1 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d5a15b18f8dad99ee398cbf5380180c2 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.1lY 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.1lY 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.1lY 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=780fec22eed0b85657064d7760db3a2af9a3dbc1d868d041 00:16:56.909 18:15:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.hnu 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 780fec22eed0b85657064d7760db3a2af9a3dbc1d868d041 2 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 780fec22eed0b85657064d7760db3a2af9a3dbc1d868d041 2 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=780fec22eed0b85657064d7760db3a2af9a3dbc1d868d041 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.hnu 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.hnu 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.hnu 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9e731a3409e42dd5b8b5449e2ba1bead268eaca381223ab3 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.czZ 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9e731a3409e42dd5b8b5449e2ba1bead268eaca381223ab3 2 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9e731a3409e42dd5b8b5449e2ba1bead268eaca381223ab3 2 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9e731a3409e42dd5b8b5449e2ba1bead268eaca381223ab3 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.czZ 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.czZ 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.czZ 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1cc220086fcf22fd9ec4bc31e1f343b6 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.vka 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1cc220086fcf22fd9ec4bc31e1f343b6 1 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1cc220086fcf22fd9ec4bc31e1f343b6 1 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1cc220086fcf22fd9ec4bc31e1f343b6 00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:16:56.909 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.vka 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.vka 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.vka 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=63248867a7231e445c1e8ac9ae11d9c2819851888501614e25d06945d589bb22 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.KMh 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 63248867a7231e445c1e8ac9ae11d9c2819851888501614e25d06945d589bb22 3 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 63248867a7231e445c1e8ac9ae11d9c2819851888501614e25d06945d589bb22 3 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=63248867a7231e445c1e8ac9ae11d9c2819851888501614e25d06945d589bb22 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.KMh 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.KMh 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.KMh 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1959734 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1959734 ']' 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.171 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.432 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.432 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1959924 /var/tmp/host.sock 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1959924 ']' 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:57.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KtE 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.KtE 00:16:57.433 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.KtE 00:16:57.693 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.nVa ]] 00:16:57.693 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nVa 00:16:57.693 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.693 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.693 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.693 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nVa 00:16:57.693 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nVa 00:16:57.954 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:57.954 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.1lY 00:16:57.954 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.954 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.954 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.954 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.1lY 00:16:57.954 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.1lY 00:16:58.214 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.hnu ]] 00:16:58.214 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hnu 00:16:58.214 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.214 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.214 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.214 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hnu 00:16:58.214 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hnu 00:16:58.214 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:58.214 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.czZ 00:16:58.214 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.214 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.214 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.214 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.czZ 00:16:58.214 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.czZ 00:16:58.475 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.vka ]] 00:16:58.475 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vka 00:16:58.475 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.475 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.476 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.476 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vka 00:16:58.476 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vka 00:16:58.737 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:58.737 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.KMh 00:16:58.737 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.737 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.737 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.737 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.KMh 00:16:58.737 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.KMh 00:16:58.737 18:16:00 
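The `/tmp/spdk.key-*` files registered above hold NVMe DH-HMAC-CHAP secrets in the `DHHC-1:<id>:<base64>:` envelope visible later in the trace. A rough sketch of producing such a file — assumptions: `00` denotes a plain (non-hashed) secret, and real generators such as `nvme gen-dhchap-key` also fold a CRC of the secret into the base64 payload, which this sketch omits, so the result is format-shaped but not a spec-valid key:

```shell
# Write a 32-byte random secret in the DHHC-1 envelope used by the
# /tmp/spdk.key-* files above ("00" = plain secret; CRC omitted here).
keyfile=$(mktemp)
secret=$(head -c 32 /dev/urandom | base64 | tr -d '\n')
printf 'DHHC-1:00:%s:\n' "$secret" > "$keyfile"
chmod 600 "$keyfile"   # key files are secrets; keep them owner-only
```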
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:58.737 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:58.737 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.737 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.737 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:58.737 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:58.999 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:58.999 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.999 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.999 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:58.999 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:58.999 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.999 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.999 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.999 18:16:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.999 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.999 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.999 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.999 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.261 00:16:59.261 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.261 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.261 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.523 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.523 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.523 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.523 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:59.523 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.523 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.523 { 00:16:59.523 "cntlid": 1, 00:16:59.523 "qid": 0, 00:16:59.523 "state": "enabled", 00:16:59.523 "thread": "nvmf_tgt_poll_group_000", 00:16:59.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:59.523 "listen_address": { 00:16:59.523 "trtype": "TCP", 00:16:59.523 "adrfam": "IPv4", 00:16:59.523 "traddr": "10.0.0.2", 00:16:59.523 "trsvcid": "4420" 00:16:59.523 }, 00:16:59.523 "peer_address": { 00:16:59.523 "trtype": "TCP", 00:16:59.523 "adrfam": "IPv4", 00:16:59.523 "traddr": "10.0.0.1", 00:16:59.523 "trsvcid": "50758" 00:16:59.523 }, 00:16:59.523 "auth": { 00:16:59.523 "state": "completed", 00:16:59.523 "digest": "sha256", 00:16:59.523 "dhgroup": "null" 00:16:59.523 } 00:16:59.523 } 00:16:59.523 ]' 00:16:59.523 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.523 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.523 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.523 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:59.523 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.786 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.786 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.786 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
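The qpair verification above (target/auth.sh@75-77) pulls `nvmf_subsystem_get_qpairs` output and checks the negotiated auth parameters with `jq`. The same checks can be re-run offline against a trimmed copy of one qpair entry from the log (requires `jq`; the JSON is reduced to the fields the test inspects):

```shell
# Re-run the auth-state checks from the trace against a captured qpair.
qpairs='[{"cntlid": 1, "qid": 0, "state": "enabled", "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}}]'
digest=$(printf '%s' "$qpairs" | jq -r '.[0].auth.digest')
dhgroup=$(printf '%s' "$qpairs" | jq -r '.[0].auth.dhgroup')
state=$(printf '%s' "$qpairs" | jq -r '.[0].auth.state')
# All three must match for the connect_authenticate round to pass.
[ "$digest" = sha256 ] && [ "$dhgroup" = null ] && [ "$state" = completed ] && echo "auth completed"
```

Note that `"dhgroup": "null"` is the literal string `null` (the NULL DH group), not JSON null, which is why the string comparison works.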
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.786 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:16:59.786 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:00.730 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.730 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:00.730 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.730 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.730 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.730 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.730 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
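The host-side connect in the trace (target/auth.sh@36) drives `nvme-cli` with per-role DHCHAP secrets. A sketch of that invocation, with the binary made overridable so the command line can be inspected without `nvme-cli` or a live target — `NVME_CMD` and the placeholder secret are illustrative, not from the test script:

```shell
# nvme_connect: mirror the nvme-cli connect seen in the log. Extra
# arguments ("$@") carry --dhchap-secret / --dhchap-ctrl-secret.
NVME_CMD=${NVME_CMD:-nvme}
nvme_connect() {
    "$NVME_CMD" connect -t tcp -a 10.0.0.2 \
        -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$HOSTID" \
        --hostid "$HOSTID" -l 0 "$@"
}
```

`--dhchap-secret` is the host's own key and `--dhchap-ctrl-secret` the controller key, matching the `key`/`ckey` pairing registered on both sides earlier in the trace.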
sha256 --dhchap-dhgroups null 00:17:00.730 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:00.730 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:00.730 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.730 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:00.730 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:00.730 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:00.730 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.730 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.730 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.730 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.730 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.730 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.730 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.730 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.991 00:17:00.991 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.991 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.991 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.252 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.252 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.252 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.252 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.252 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.252 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.252 { 00:17:01.252 "cntlid": 3, 00:17:01.252 "qid": 0, 00:17:01.252 "state": "enabled", 00:17:01.252 "thread": "nvmf_tgt_poll_group_000", 00:17:01.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:01.252 "listen_address": { 00:17:01.252 "trtype": "TCP", 00:17:01.252 "adrfam": "IPv4", 00:17:01.252 
"traddr": "10.0.0.2", 00:17:01.252 "trsvcid": "4420" 00:17:01.252 }, 00:17:01.252 "peer_address": { 00:17:01.252 "trtype": "TCP", 00:17:01.252 "adrfam": "IPv4", 00:17:01.252 "traddr": "10.0.0.1", 00:17:01.252 "trsvcid": "50780" 00:17:01.252 }, 00:17:01.252 "auth": { 00:17:01.252 "state": "completed", 00:17:01.252 "digest": "sha256", 00:17:01.252 "dhgroup": "null" 00:17:01.252 } 00:17:01.252 } 00:17:01.252 ]' 00:17:01.252 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.252 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.252 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.252 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:01.252 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.252 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.252 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.252 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.513 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:01.513 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:02.085 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.085 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:02.085 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.085 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.085 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.085 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.085 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:02.085 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:02.346 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:02.346 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.346 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:02.346 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:17:02.346 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:02.346 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.346 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.346 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.346 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.346 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.346 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.346 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.346 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.606 00:17:02.606 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.606 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.606 
18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.867 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.867 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.867 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.867 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.867 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.867 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.867 { 00:17:02.867 "cntlid": 5, 00:17:02.867 "qid": 0, 00:17:02.867 "state": "enabled", 00:17:02.867 "thread": "nvmf_tgt_poll_group_000", 00:17:02.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:02.867 "listen_address": { 00:17:02.867 "trtype": "TCP", 00:17:02.867 "adrfam": "IPv4", 00:17:02.867 "traddr": "10.0.0.2", 00:17:02.867 "trsvcid": "4420" 00:17:02.867 }, 00:17:02.867 "peer_address": { 00:17:02.867 "trtype": "TCP", 00:17:02.867 "adrfam": "IPv4", 00:17:02.867 "traddr": "10.0.0.1", 00:17:02.867 "trsvcid": "50802" 00:17:02.867 }, 00:17:02.867 "auth": { 00:17:02.867 "state": "completed", 00:17:02.867 "digest": "sha256", 00:17:02.867 "dhgroup": "null" 00:17:02.867 } 00:17:02.867 } 00:17:02.867 ]' 00:17:02.867 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.867 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.867 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:17:02.867 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:02.867 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.867 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.867 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.867 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.127 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:03.127 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:03.701 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.701 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.701 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.701 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.701 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.701 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.701 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:03.701 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:03.963 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:03.963 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.963 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:03.963 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:03.963 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:03.963 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.963 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:03.963 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.963 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:03.963 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.963 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:03.963 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.963 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.224 00:17:04.224 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.224 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.224 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.485 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.485 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.485 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.485 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.485 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.485 
18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.485 { 00:17:04.485 "cntlid": 7, 00:17:04.485 "qid": 0, 00:17:04.486 "state": "enabled", 00:17:04.486 "thread": "nvmf_tgt_poll_group_000", 00:17:04.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:04.486 "listen_address": { 00:17:04.486 "trtype": "TCP", 00:17:04.486 "adrfam": "IPv4", 00:17:04.486 "traddr": "10.0.0.2", 00:17:04.486 "trsvcid": "4420" 00:17:04.486 }, 00:17:04.486 "peer_address": { 00:17:04.486 "trtype": "TCP", 00:17:04.486 "adrfam": "IPv4", 00:17:04.486 "traddr": "10.0.0.1", 00:17:04.486 "trsvcid": "38086" 00:17:04.486 }, 00:17:04.486 "auth": { 00:17:04.486 "state": "completed", 00:17:04.486 "digest": "sha256", 00:17:04.486 "dhgroup": "null" 00:17:04.486 } 00:17:04.486 } 00:17:04.486 ]' 00:17:04.486 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.486 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.486 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.486 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:04.486 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.486 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.486 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.486 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.747 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:04.747 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:05.318 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.318 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:05.318 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.318 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.318 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.318 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.318 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.318 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:05.318 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
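At this point the trace's outer loops (target/auth.sh@118-120) advance from dhgroup `null` to `ffdhe2048`, repeating one connect/verify/disconnect round per digest × dhgroup × keyid combination. A sketch of that enumeration — the lists below are only the subset visible in this chunk of the log, not the full test matrix:

```shell
# Enumerate the combinations this section of the test sweeps: for each
# one, the harness calls bdev_nvme_set_options, then connect_authenticate.
digests=(sha256)
dhgroups=(null ffdhe2048)
keys=(key0 key1 key2 key3)
combos=0
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            echo "bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup  # keyid=$keyid"
            combos=$((combos + 1))
        done
    done
done
echo "$combos combinations"
```

With the visible subset this yields 8 rounds, which matches the trace: four key rounds for `null`, then the same four restarting for `ffdhe2048`.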
--dhchap-dhgroups ffdhe2048 00:17:05.579 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:05.579 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.579 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:05.579 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:05.579 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:05.579 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.579 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.579 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.579 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.579 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.579 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.579 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.579 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.840 00:17:05.840 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.840 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.840 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.100 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.100 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.100 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.100 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.100 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.100 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.100 { 00:17:06.100 "cntlid": 9, 00:17:06.100 "qid": 0, 00:17:06.100 "state": "enabled", 00:17:06.100 "thread": "nvmf_tgt_poll_group_000", 00:17:06.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:06.100 "listen_address": { 00:17:06.100 "trtype": "TCP", 00:17:06.100 "adrfam": "IPv4", 00:17:06.100 "traddr": "10.0.0.2", 00:17:06.100 "trsvcid": "4420" 00:17:06.100 }, 00:17:06.100 "peer_address": { 00:17:06.100 "trtype": "TCP", 00:17:06.100 "adrfam": "IPv4", 00:17:06.100 "traddr": "10.0.0.1", 00:17:06.100 "trsvcid": "38110" 00:17:06.100 
}, 00:17:06.100 "auth": { 00:17:06.100 "state": "completed", 00:17:06.100 "digest": "sha256", 00:17:06.100 "dhgroup": "ffdhe2048" 00:17:06.100 } 00:17:06.100 } 00:17:06.100 ]' 00:17:06.100 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.100 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.100 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.100 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:06.100 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.100 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.100 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.100 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.361 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:06.361 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret 
DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:06.933 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.933 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.933 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.933 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.933 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.933 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.933 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:06.933 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:07.194 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:07.194 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.194 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:07.194 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:07.194 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:07.194 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.194 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.194 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.194 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.194 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.194 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.194 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.194 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.455 00:17:07.455 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.455 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.455 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.716 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.716 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.716 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.716 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.716 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.716 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.716 { 00:17:07.716 "cntlid": 11, 00:17:07.716 "qid": 0, 00:17:07.716 "state": "enabled", 00:17:07.716 "thread": "nvmf_tgt_poll_group_000", 00:17:07.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:07.716 "listen_address": { 00:17:07.716 "trtype": "TCP", 00:17:07.716 "adrfam": "IPv4", 00:17:07.716 "traddr": "10.0.0.2", 00:17:07.716 "trsvcid": "4420" 00:17:07.716 }, 00:17:07.716 "peer_address": { 00:17:07.716 "trtype": "TCP", 00:17:07.716 "adrfam": "IPv4", 00:17:07.716 "traddr": "10.0.0.1", 00:17:07.716 "trsvcid": "38136" 00:17:07.716 }, 00:17:07.716 "auth": { 00:17:07.716 "state": "completed", 00:17:07.716 "digest": "sha256", 00:17:07.716 "dhgroup": "ffdhe2048" 00:17:07.716 } 00:17:07.716 } 00:17:07.716 ]' 00:17:07.716 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.716 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.716 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.716 18:16:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.716 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.716 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.716 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.716 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.976 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:07.976 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:08.547 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.547 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.547 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:08.547 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.547 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.547 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.547 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:08.548 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:08.809 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:08.809 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.809 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:08.809 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:08.809 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:08.809 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.809 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.809 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.809 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:08.809 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.809 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.809 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.809 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.071 00:17:09.071 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.071 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.071 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.332 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.332 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.332 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.332 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.332 18:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.332 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.332 { 00:17:09.332 "cntlid": 13, 00:17:09.332 "qid": 0, 00:17:09.332 "state": "enabled", 00:17:09.332 "thread": "nvmf_tgt_poll_group_000", 00:17:09.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:09.332 "listen_address": { 00:17:09.332 "trtype": "TCP", 00:17:09.332 "adrfam": "IPv4", 00:17:09.332 "traddr": "10.0.0.2", 00:17:09.332 "trsvcid": "4420" 00:17:09.332 }, 00:17:09.332 "peer_address": { 00:17:09.332 "trtype": "TCP", 00:17:09.332 "adrfam": "IPv4", 00:17:09.332 "traddr": "10.0.0.1", 00:17:09.332 "trsvcid": "38180" 00:17:09.332 }, 00:17:09.332 "auth": { 00:17:09.332 "state": "completed", 00:17:09.332 "digest": "sha256", 00:17:09.332 "dhgroup": "ffdhe2048" 00:17:09.332 } 00:17:09.332 } 00:17:09.332 ]' 00:17:09.332 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.332 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.332 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.332 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.332 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.332 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.332 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.332 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.593 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:09.593 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:10.163 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.163 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:10.163 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.163 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.163 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.163 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.163 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:10.163 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:10.423 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:10.423 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.423 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:10.423 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:10.423 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:10.423 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.423 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:10.423 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.423 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.423 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.424 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:10.424 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.424 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.684 00:17:10.684 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.684 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.684 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.945 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.945 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.945 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.945 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.945 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.945 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.945 { 00:17:10.945 "cntlid": 15, 00:17:10.945 "qid": 0, 00:17:10.945 "state": "enabled", 00:17:10.945 "thread": "nvmf_tgt_poll_group_000", 00:17:10.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:10.945 "listen_address": { 00:17:10.945 "trtype": "TCP", 00:17:10.945 "adrfam": "IPv4", 00:17:10.945 "traddr": "10.0.0.2", 00:17:10.945 "trsvcid": "4420" 00:17:10.945 }, 00:17:10.945 "peer_address": { 00:17:10.945 "trtype": "TCP", 00:17:10.945 "adrfam": "IPv4", 00:17:10.945 "traddr": "10.0.0.1", 
00:17:10.945 "trsvcid": "38208" 00:17:10.945 }, 00:17:10.945 "auth": { 00:17:10.945 "state": "completed", 00:17:10.945 "digest": "sha256", 00:17:10.945 "dhgroup": "ffdhe2048" 00:17:10.945 } 00:17:10.945 } 00:17:10.945 ]' 00:17:10.945 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.945 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.945 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.945 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:10.945 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.945 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.945 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.945 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.206 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:11.206 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:11.776 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.776 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.776 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.776 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.776 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.776 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.776 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.776 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:11.776 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:12.037 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:12.037 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.037 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:12.037 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:12.037 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:12.037 18:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.037 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.037 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.037 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.037 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.037 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.037 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.037 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.298 00:17:12.298 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.298 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.298 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.560 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.560 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.560 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.560 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.560 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.560 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.560 { 00:17:12.560 "cntlid": 17, 00:17:12.560 "qid": 0, 00:17:12.560 "state": "enabled", 00:17:12.560 "thread": "nvmf_tgt_poll_group_000", 00:17:12.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:12.560 "listen_address": { 00:17:12.560 "trtype": "TCP", 00:17:12.560 "adrfam": "IPv4", 00:17:12.560 "traddr": "10.0.0.2", 00:17:12.560 "trsvcid": "4420" 00:17:12.560 }, 00:17:12.560 "peer_address": { 00:17:12.560 "trtype": "TCP", 00:17:12.560 "adrfam": "IPv4", 00:17:12.560 "traddr": "10.0.0.1", 00:17:12.560 "trsvcid": "38232" 00:17:12.560 }, 00:17:12.560 "auth": { 00:17:12.560 "state": "completed", 00:17:12.560 "digest": "sha256", 00:17:12.560 "dhgroup": "ffdhe3072" 00:17:12.560 } 00:17:12.560 } 00:17:12.560 ]' 00:17:12.560 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.560 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.560 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.560 18:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:12.560 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.560 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.560 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.560 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.821 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:12.821 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:13.392 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.392 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:13.392 18:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.392 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.392 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.392 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.392 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:13.392 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:13.657 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:13.657 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.657 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:13.657 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:13.657 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:13.657 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.657 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.657 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.657 18:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.657 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.657 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.657 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.657 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.917 00:17:13.917 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.917 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.917 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.178 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.178 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.179 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.179 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.179 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.179 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.179 { 00:17:14.179 "cntlid": 19, 00:17:14.179 "qid": 0, 00:17:14.179 "state": "enabled", 00:17:14.179 "thread": "nvmf_tgt_poll_group_000", 00:17:14.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:14.179 "listen_address": { 00:17:14.179 "trtype": "TCP", 00:17:14.179 "adrfam": "IPv4", 00:17:14.179 "traddr": "10.0.0.2", 00:17:14.179 "trsvcid": "4420" 00:17:14.179 }, 00:17:14.179 "peer_address": { 00:17:14.179 "trtype": "TCP", 00:17:14.179 "adrfam": "IPv4", 00:17:14.179 "traddr": "10.0.0.1", 00:17:14.179 "trsvcid": "45532" 00:17:14.179 }, 00:17:14.179 "auth": { 00:17:14.179 "state": "completed", 00:17:14.179 "digest": "sha256", 00:17:14.179 "dhgroup": "ffdhe3072" 00:17:14.179 } 00:17:14.179 } 00:17:14.179 ]' 00:17:14.179 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.179 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.179 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.179 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.179 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.179 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.179 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.179 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.439 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:14.439 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:15.011 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.011 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.011 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.011 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.011 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.011 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.011 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:15.011 18:16:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:15.271 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:15.271 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.271 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:15.271 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:15.271 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:15.271 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.271 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.271 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.271 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.271 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.271 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.271 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.271 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.532 00:17:15.532 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.532 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.532 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.792 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.792 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.792 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.792 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.792 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.792 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.792 { 00:17:15.792 "cntlid": 21, 00:17:15.792 "qid": 0, 00:17:15.792 "state": "enabled", 00:17:15.792 "thread": "nvmf_tgt_poll_group_000", 00:17:15.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:15.792 "listen_address": { 00:17:15.792 "trtype": "TCP", 00:17:15.792 "adrfam": "IPv4", 00:17:15.792 "traddr": "10.0.0.2", 00:17:15.792 
"trsvcid": "4420" 00:17:15.792 }, 00:17:15.792 "peer_address": { 00:17:15.792 "trtype": "TCP", 00:17:15.792 "adrfam": "IPv4", 00:17:15.792 "traddr": "10.0.0.1", 00:17:15.792 "trsvcid": "45572" 00:17:15.792 }, 00:17:15.792 "auth": { 00:17:15.792 "state": "completed", 00:17:15.792 "digest": "sha256", 00:17:15.792 "dhgroup": "ffdhe3072" 00:17:15.792 } 00:17:15.792 } 00:17:15.792 ]' 00:17:15.792 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.792 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.792 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.792 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.792 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.792 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.792 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.792 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.053 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:16.053 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:16.624 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.624 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.624 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.624 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.624 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.624 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.624 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:16.624 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:16.884 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:16.884 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.884 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:16.884 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:16.884 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.884 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.884 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:16.884 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.884 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.884 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.884 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.884 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.884 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.144 00:17:17.144 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.144 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.144 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.405 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.405 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.405 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.405 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.405 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.405 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.405 { 00:17:17.405 "cntlid": 23, 00:17:17.405 "qid": 0, 00:17:17.405 "state": "enabled", 00:17:17.405 "thread": "nvmf_tgt_poll_group_000", 00:17:17.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:17.405 "listen_address": { 00:17:17.405 "trtype": "TCP", 00:17:17.405 "adrfam": "IPv4", 00:17:17.405 "traddr": "10.0.0.2", 00:17:17.405 "trsvcid": "4420" 00:17:17.405 }, 00:17:17.405 "peer_address": { 00:17:17.405 "trtype": "TCP", 00:17:17.405 "adrfam": "IPv4", 00:17:17.405 "traddr": "10.0.0.1", 00:17:17.405 "trsvcid": "45606" 00:17:17.405 }, 00:17:17.405 "auth": { 00:17:17.405 "state": "completed", 00:17:17.405 "digest": "sha256", 00:17:17.405 "dhgroup": "ffdhe3072" 00:17:17.405 } 00:17:17.405 } 00:17:17.405 ]' 00:17:17.405 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.405 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.405 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.405 18:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:17.405 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.405 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.405 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.405 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.666 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:17.666 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:18.246 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.246 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.246 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.246 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
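Each pass of the loop traced above runs the same cycle: configure the host's allowed DH-CHAP digest/dhgroup, register the host NQN on the subsystem with a key pair, attach a controller, assert the negotiated auth parameters from `nvmf_subsystem_get_qpairs`, then detach. A condensed, illustrative sketch of one iteration follows; the commented RPC commands are copied from the trace, while the canned `qpairs` JSON and the `sed` extraction are stand-ins (the trace itself uses `jq -r '.[0].auth.state'` etc. against live RPC output, which is unavailable here):

```shell
#!/bin/sh
# One iteration of the auth-test loop, condensed from the trace above.
# The real test issues these RPCs (shown as comments, not executed here):
#
#   rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
#       --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
#   rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> \
#       --dhchap-key key1 --dhchap-ctrlr-key ckey1
#   rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
#       -a 10.0.0.2 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 \
#       -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
#
# After attaching, the test asserts the auth fields reported by
# nvmf_subsystem_get_qpairs. The JSON below is a trimmed stand-in for
# that output, matching the shape seen in the trace.
qpairs='[{"qid": 0, "state": "enabled", "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe3072"}}]'

# Greedy leading .* makes sed pick the LAST "state" occurrence, i.e. the
# nested auth.state rather than the qpair state (jq does this cleanly in
# the real test; sed keeps this sketch dependency-free).
state=$(printf '%s' "$qpairs"   | sed -n 's/.*"state": "\([^"]*\)".*/\1/p')
digest=$(printf '%s' "$qpairs"  | sed -n 's/.*"digest": "\([^"]*\)".*/\1/p')
dhgroup=$(printf '%s' "$qpairs" | sed -n 's/.*"dhgroup": "\([^"]*\)".*/\1/p')

echo "auth: $state $digest $dhgroup"   # prints: auth: completed sha256 ffdhe3072
```

The cycle then ends with `bdev_nvme_detach_controller nvme0` and `nvmf_subsystem_remove_host`, as the next trace entries show, before the outer loops advance to the next key index or dhgroup (ffdhe4096 below).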
00:17:18.246 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.246 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.246 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.246 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:18.246 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:18.506 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:18.506 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.506 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:18.506 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:18.506 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.506 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.506 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.506 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.506 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:18.506 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.506 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.506 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.506 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.766 00:17:18.766 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.766 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.766 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.027 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.027 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.027 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.027 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.027 18:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.027 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.027 { 00:17:19.027 "cntlid": 25, 00:17:19.027 "qid": 0, 00:17:19.027 "state": "enabled", 00:17:19.027 "thread": "nvmf_tgt_poll_group_000", 00:17:19.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:19.027 "listen_address": { 00:17:19.027 "trtype": "TCP", 00:17:19.027 "adrfam": "IPv4", 00:17:19.027 "traddr": "10.0.0.2", 00:17:19.027 "trsvcid": "4420" 00:17:19.027 }, 00:17:19.027 "peer_address": { 00:17:19.027 "trtype": "TCP", 00:17:19.027 "adrfam": "IPv4", 00:17:19.027 "traddr": "10.0.0.1", 00:17:19.027 "trsvcid": "45634" 00:17:19.027 }, 00:17:19.027 "auth": { 00:17:19.027 "state": "completed", 00:17:19.027 "digest": "sha256", 00:17:19.027 "dhgroup": "ffdhe4096" 00:17:19.027 } 00:17:19.027 } 00:17:19.027 ]' 00:17:19.027 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.027 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.027 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.027 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:19.027 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.027 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.027 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.027 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.288 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:19.288 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:19.860 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.120 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.120 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.120 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.120 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.120 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.120 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:20.120 18:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:20.380 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:20.380 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.380 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:20.380 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:20.380 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:20.380 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.380 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.380 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.380 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.380 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.380 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.380 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.380 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.640 00:17:20.640 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.640 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.640 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.640 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.902 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.902 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.902 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.902 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.902 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.902 { 00:17:20.902 "cntlid": 27, 00:17:20.902 "qid": 0, 00:17:20.902 "state": "enabled", 00:17:20.902 "thread": "nvmf_tgt_poll_group_000", 00:17:20.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:20.902 "listen_address": { 00:17:20.902 "trtype": "TCP", 00:17:20.902 "adrfam": "IPv4", 00:17:20.902 "traddr": "10.0.0.2", 00:17:20.902 
"trsvcid": "4420" 00:17:20.902 }, 00:17:20.902 "peer_address": { 00:17:20.902 "trtype": "TCP", 00:17:20.902 "adrfam": "IPv4", 00:17:20.902 "traddr": "10.0.0.1", 00:17:20.902 "trsvcid": "45670" 00:17:20.902 }, 00:17:20.902 "auth": { 00:17:20.902 "state": "completed", 00:17:20.902 "digest": "sha256", 00:17:20.902 "dhgroup": "ffdhe4096" 00:17:20.902 } 00:17:20.902 } 00:17:20.902 ]' 00:17:20.902 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.902 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.902 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.902 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.902 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.902 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.902 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.902 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.163 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:21.163 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:21.734 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.734 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.734 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.734 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.734 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.734 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.734 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:21.734 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:21.994 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:21.994 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.994 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:21.994 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:21.994 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:21.994 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.994 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.994 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.994 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.994 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.994 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.994 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.994 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.254 00:17:22.254 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.254 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:22.254 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.515 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.515 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.515 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.516 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.516 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.516 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.516 { 00:17:22.516 "cntlid": 29, 00:17:22.516 "qid": 0, 00:17:22.516 "state": "enabled", 00:17:22.516 "thread": "nvmf_tgt_poll_group_000", 00:17:22.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:22.516 "listen_address": { 00:17:22.516 "trtype": "TCP", 00:17:22.516 "adrfam": "IPv4", 00:17:22.516 "traddr": "10.0.0.2", 00:17:22.516 "trsvcid": "4420" 00:17:22.516 }, 00:17:22.516 "peer_address": { 00:17:22.516 "trtype": "TCP", 00:17:22.516 "adrfam": "IPv4", 00:17:22.516 "traddr": "10.0.0.1", 00:17:22.516 "trsvcid": "45698" 00:17:22.516 }, 00:17:22.516 "auth": { 00:17:22.516 "state": "completed", 00:17:22.516 "digest": "sha256", 00:17:22.516 "dhgroup": "ffdhe4096" 00:17:22.516 } 00:17:22.516 } 00:17:22.516 ]' 00:17:22.516 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.516 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.516 18:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.516 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.516 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.516 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.516 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.516 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.777 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:22.777 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:23.348 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.348 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.348 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.348 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.348 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.348 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.348 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:23.348 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:23.609 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:23.609 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.609 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:23.609 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:23.609 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.609 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.609 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:23.609 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.609 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.609 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.609 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.609 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.609 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.870 00:17:23.870 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.870 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.870 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.131 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.131 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.131 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.131 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:24.131 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.131 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.131 { 00:17:24.131 "cntlid": 31, 00:17:24.131 "qid": 0, 00:17:24.131 "state": "enabled", 00:17:24.131 "thread": "nvmf_tgt_poll_group_000", 00:17:24.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:24.131 "listen_address": { 00:17:24.131 "trtype": "TCP", 00:17:24.131 "adrfam": "IPv4", 00:17:24.131 "traddr": "10.0.0.2", 00:17:24.131 "trsvcid": "4420" 00:17:24.131 }, 00:17:24.131 "peer_address": { 00:17:24.131 "trtype": "TCP", 00:17:24.131 "adrfam": "IPv4", 00:17:24.131 "traddr": "10.0.0.1", 00:17:24.131 "trsvcid": "55888" 00:17:24.131 }, 00:17:24.131 "auth": { 00:17:24.131 "state": "completed", 00:17:24.131 "digest": "sha256", 00:17:24.131 "dhgroup": "ffdhe4096" 00:17:24.131 } 00:17:24.131 } 00:17:24.131 ]' 00:17:24.131 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.131 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.131 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.131 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:24.131 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.131 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.131 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.131 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.391 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:24.392 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:24.962 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.962 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.962 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.962 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.962 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.962 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.962 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.962 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:24.962 18:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:25.223 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:25.223 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.223 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:25.223 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:25.223 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:25.223 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.223 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.223 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.223 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.223 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.223 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.223 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.223 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.484 00:17:25.484 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.484 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.484 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.743 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.743 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.743 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.743 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.743 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.743 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.743 { 00:17:25.743 "cntlid": 33, 00:17:25.743 "qid": 0, 00:17:25.743 "state": "enabled", 00:17:25.743 "thread": "nvmf_tgt_poll_group_000", 00:17:25.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:25.743 "listen_address": { 00:17:25.743 "trtype": "TCP", 00:17:25.743 "adrfam": "IPv4", 00:17:25.743 "traddr": "10.0.0.2", 00:17:25.743 
"trsvcid": "4420" 00:17:25.743 }, 00:17:25.743 "peer_address": { 00:17:25.743 "trtype": "TCP", 00:17:25.743 "adrfam": "IPv4", 00:17:25.743 "traddr": "10.0.0.1", 00:17:25.743 "trsvcid": "55920" 00:17:25.743 }, 00:17:25.743 "auth": { 00:17:25.743 "state": "completed", 00:17:25.743 "digest": "sha256", 00:17:25.743 "dhgroup": "ffdhe6144" 00:17:25.743 } 00:17:25.743 } 00:17:25.743 ]' 00:17:25.743 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.743 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.743 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.743 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:25.743 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.003 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.003 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.003 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.003 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:26.003 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.943 18:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.943 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.203 00:17:27.203 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.203 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.203 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.463 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.463 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.463 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.463 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.463 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.463 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.463 { 00:17:27.463 "cntlid": 35, 00:17:27.463 "qid": 0, 00:17:27.463 "state": "enabled", 00:17:27.463 "thread": "nvmf_tgt_poll_group_000", 00:17:27.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:27.463 "listen_address": { 00:17:27.463 "trtype": "TCP", 00:17:27.463 "adrfam": "IPv4", 00:17:27.463 "traddr": "10.0.0.2", 00:17:27.463 "trsvcid": "4420" 00:17:27.463 }, 00:17:27.463 "peer_address": { 00:17:27.463 "trtype": "TCP", 00:17:27.463 "adrfam": "IPv4", 00:17:27.463 "traddr": "10.0.0.1", 00:17:27.463 "trsvcid": "55942" 00:17:27.463 }, 00:17:27.463 "auth": { 00:17:27.463 "state": "completed", 00:17:27.463 "digest": "sha256", 00:17:27.463 "dhgroup": "ffdhe6144" 00:17:27.463 } 00:17:27.463 } 00:17:27.463 ]' 00:17:27.463 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.463 18:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:27.463 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.463 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:27.463 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.463 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.463 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.463 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.725 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:27.725 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:28.296 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.558 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.130 00:17:29.130 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.130 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.130 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.130 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.130 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.130 18:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.130 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.130 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.130 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.130 { 00:17:29.130 "cntlid": 37, 00:17:29.130 "qid": 0, 00:17:29.130 "state": "enabled", 00:17:29.130 "thread": "nvmf_tgt_poll_group_000", 00:17:29.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:29.130 "listen_address": { 00:17:29.130 "trtype": "TCP", 00:17:29.130 "adrfam": "IPv4", 00:17:29.130 "traddr": "10.0.0.2", 00:17:29.130 "trsvcid": "4420" 00:17:29.130 }, 00:17:29.130 "peer_address": { 00:17:29.130 "trtype": "TCP", 00:17:29.130 "adrfam": "IPv4", 00:17:29.130 "traddr": "10.0.0.1", 00:17:29.130 "trsvcid": "55984" 00:17:29.130 }, 00:17:29.130 "auth": { 00:17:29.130 "state": "completed", 00:17:29.130 "digest": "sha256", 00:17:29.130 "dhgroup": "ffdhe6144" 00:17:29.130 } 00:17:29.130 } 00:17:29.130 ]' 00:17:29.130 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.130 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.130 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.391 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:29.391 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.391 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.391 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.391 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.391 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:29.391 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.335 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.596 00:17:30.596 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.596 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.596 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.857 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.857 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.857 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.857 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.857 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.857 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.857 { 00:17:30.857 "cntlid": 39, 00:17:30.857 "qid": 0, 00:17:30.857 "state": "enabled", 00:17:30.857 "thread": "nvmf_tgt_poll_group_000", 00:17:30.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:30.857 "listen_address": { 00:17:30.857 "trtype": "TCP", 00:17:30.857 "adrfam": 
"IPv4", 00:17:30.857 "traddr": "10.0.0.2", 00:17:30.857 "trsvcid": "4420" 00:17:30.857 }, 00:17:30.857 "peer_address": { 00:17:30.857 "trtype": "TCP", 00:17:30.857 "adrfam": "IPv4", 00:17:30.857 "traddr": "10.0.0.1", 00:17:30.857 "trsvcid": "56012" 00:17:30.857 }, 00:17:30.857 "auth": { 00:17:30.857 "state": "completed", 00:17:30.857 "digest": "sha256", 00:17:30.857 "dhgroup": "ffdhe6144" 00:17:30.857 } 00:17:30.857 } 00:17:30.857 ]' 00:17:30.857 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.857 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.857 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.857 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:30.857 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.119 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.119 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.119 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.119 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:31.119 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:32.062 
18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.062 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.634 00:17:32.634 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.634 18:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.634 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.634 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.634 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.634 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.634 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.634 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.634 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.634 { 00:17:32.634 "cntlid": 41, 00:17:32.634 "qid": 0, 00:17:32.634 "state": "enabled", 00:17:32.634 "thread": "nvmf_tgt_poll_group_000", 00:17:32.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:32.634 "listen_address": { 00:17:32.634 "trtype": "TCP", 00:17:32.634 "adrfam": "IPv4", 00:17:32.634 "traddr": "10.0.0.2", 00:17:32.634 "trsvcid": "4420" 00:17:32.634 }, 00:17:32.634 "peer_address": { 00:17:32.634 "trtype": "TCP", 00:17:32.634 "adrfam": "IPv4", 00:17:32.634 "traddr": "10.0.0.1", 00:17:32.634 "trsvcid": "56042" 00:17:32.634 }, 00:17:32.634 "auth": { 00:17:32.634 "state": "completed", 00:17:32.634 "digest": "sha256", 00:17:32.634 "dhgroup": "ffdhe8192" 00:17:32.634 } 00:17:32.634 } 00:17:32.634 ]' 00:17:32.634 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.895 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:17:32.895 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.895 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.895 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.895 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.895 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.895 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.156 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:33.156 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:33.727 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.727 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.727 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.727 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.727 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.727 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.727 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:33.727 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:33.988 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:33.988 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.988 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:33.988 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:33.988 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:33.988 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.988 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:33.988 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.988 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.988 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.988 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.988 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.988 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.248 00:17:34.510 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.510 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.510 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.510 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.510 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.510 18:16:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.510 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.510 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.510 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.510 { 00:17:34.510 "cntlid": 43, 00:17:34.510 "qid": 0, 00:17:34.510 "state": "enabled", 00:17:34.510 "thread": "nvmf_tgt_poll_group_000", 00:17:34.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.510 "listen_address": { 00:17:34.510 "trtype": "TCP", 00:17:34.510 "adrfam": "IPv4", 00:17:34.510 "traddr": "10.0.0.2", 00:17:34.510 "trsvcid": "4420" 00:17:34.510 }, 00:17:34.510 "peer_address": { 00:17:34.510 "trtype": "TCP", 00:17:34.510 "adrfam": "IPv4", 00:17:34.510 "traddr": "10.0.0.1", 00:17:34.510 "trsvcid": "36330" 00:17:34.510 }, 00:17:34.510 "auth": { 00:17:34.510 "state": "completed", 00:17:34.510 "digest": "sha256", 00:17:34.510 "dhgroup": "ffdhe8192" 00:17:34.510 } 00:17:34.510 } 00:17:34.510 ]' 00:17:34.510 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.772 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.772 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.772 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.772 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.772 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.772 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.772 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.772 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:34.772 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:35.715 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.715 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.715 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.715 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.715 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.715 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.715 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:35.715 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:35.715 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:35.715 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.715 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:35.715 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:35.715 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:35.716 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.716 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.716 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.716 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.716 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.716 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.716 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.716 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.287 00:17:36.287 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.287 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.288 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.288 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.288 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.288 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.288 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.288 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.288 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.288 { 00:17:36.288 "cntlid": 45, 00:17:36.288 "qid": 0, 00:17:36.288 "state": "enabled", 00:17:36.288 "thread": "nvmf_tgt_poll_group_000", 00:17:36.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:36.288 
"listen_address": { 00:17:36.288 "trtype": "TCP", 00:17:36.288 "adrfam": "IPv4", 00:17:36.288 "traddr": "10.0.0.2", 00:17:36.288 "trsvcid": "4420" 00:17:36.288 }, 00:17:36.288 "peer_address": { 00:17:36.288 "trtype": "TCP", 00:17:36.288 "adrfam": "IPv4", 00:17:36.288 "traddr": "10.0.0.1", 00:17:36.288 "trsvcid": "36366" 00:17:36.288 }, 00:17:36.288 "auth": { 00:17:36.288 "state": "completed", 00:17:36.288 "digest": "sha256", 00:17:36.288 "dhgroup": "ffdhe8192" 00:17:36.288 } 00:17:36.288 } 00:17:36.288 ]' 00:17:36.288 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.549 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.549 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.549 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.549 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.549 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.549 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.549 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.810 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:36.810 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:37.380 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.380 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.380 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.380 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.380 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.380 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.380 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:37.380 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:37.641 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:37.641 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.641 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:17:37.641 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:37.641 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:37.641 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.641 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:37.641 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.641 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.641 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.641 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:37.641 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.641 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.900 00:17:38.161 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.161 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:38.161 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.161 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.161 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.161 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.161 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.161 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.161 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.161 { 00:17:38.161 "cntlid": 47, 00:17:38.161 "qid": 0, 00:17:38.161 "state": "enabled", 00:17:38.161 "thread": "nvmf_tgt_poll_group_000", 00:17:38.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:38.161 "listen_address": { 00:17:38.161 "trtype": "TCP", 00:17:38.161 "adrfam": "IPv4", 00:17:38.161 "traddr": "10.0.0.2", 00:17:38.161 "trsvcid": "4420" 00:17:38.161 }, 00:17:38.161 "peer_address": { 00:17:38.161 "trtype": "TCP", 00:17:38.161 "adrfam": "IPv4", 00:17:38.161 "traddr": "10.0.0.1", 00:17:38.161 "trsvcid": "36408" 00:17:38.161 }, 00:17:38.161 "auth": { 00:17:38.161 "state": "completed", 00:17:38.161 "digest": "sha256", 00:17:38.161 "dhgroup": "ffdhe8192" 00:17:38.161 } 00:17:38.161 } 00:17:38.161 ]' 00:17:38.161 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.161 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.161 18:16:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.422 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.422 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.422 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.422 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.422 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.422 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:38.422 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.362 
18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.362 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.363 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.623 00:17:39.623 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.623 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.623 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.884 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.884 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.884 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.884 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.884 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.884 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.884 { 00:17:39.884 "cntlid": 49, 00:17:39.884 "qid": 0, 00:17:39.884 "state": "enabled", 00:17:39.884 "thread": "nvmf_tgt_poll_group_000", 00:17:39.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.884 "listen_address": { 00:17:39.884 "trtype": "TCP", 00:17:39.884 "adrfam": "IPv4", 00:17:39.884 "traddr": "10.0.0.2", 00:17:39.884 "trsvcid": "4420" 00:17:39.884 }, 00:17:39.884 "peer_address": { 00:17:39.884 "trtype": "TCP", 00:17:39.884 "adrfam": "IPv4", 00:17:39.884 "traddr": "10.0.0.1", 00:17:39.884 "trsvcid": "36430" 00:17:39.884 }, 00:17:39.884 "auth": { 00:17:39.884 "state": "completed", 00:17:39.884 "digest": "sha384", 00:17:39.884 "dhgroup": "null" 00:17:39.884 } 00:17:39.884 } 00:17:39.884 ]' 00:17:39.884 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.884 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.884 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.884 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:39.884 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.884 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.884 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:17:39.884 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.145 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:40.145 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:40.715 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.715 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.715 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.715 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.715 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.715 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.715 18:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:40.715 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:40.975 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:40.975 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.975 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.975 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:40.975 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:40.975 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.976 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.976 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.976 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.976 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.976 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.976 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.976 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.236 00:17:41.236 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.236 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.236 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.497 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.497 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.497 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.497 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.497 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.497 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.497 { 00:17:41.497 "cntlid": 51, 00:17:41.497 "qid": 0, 00:17:41.497 "state": "enabled", 00:17:41.497 "thread": "nvmf_tgt_poll_group_000", 00:17:41.497 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:41.497 "listen_address": { 00:17:41.497 "trtype": "TCP", 00:17:41.497 "adrfam": "IPv4", 00:17:41.497 "traddr": "10.0.0.2", 00:17:41.497 "trsvcid": "4420" 00:17:41.497 }, 00:17:41.497 "peer_address": { 00:17:41.497 "trtype": "TCP", 00:17:41.497 "adrfam": "IPv4", 00:17:41.497 "traddr": "10.0.0.1", 00:17:41.497 "trsvcid": "36464" 00:17:41.497 }, 00:17:41.497 "auth": { 00:17:41.497 "state": "completed", 00:17:41.497 "digest": "sha384", 00:17:41.497 "dhgroup": "null" 00:17:41.497 } 00:17:41.497 } 00:17:41.497 ]' 00:17:41.497 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.497 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.497 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.497 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:41.497 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.498 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.498 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.498 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.759 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:41.759 18:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:42.330 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.330 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.330 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.330 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.330 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.330 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.330 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:42.330 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:42.590 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:42.590 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:17:42.590 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.590 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:42.590 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:42.590 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.590 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.590 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.590 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.590 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.590 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.590 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.591 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.851 00:17:42.851 18:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.851 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.851 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.112 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.112 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.112 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.112 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.112 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.112 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.112 { 00:17:43.112 "cntlid": 53, 00:17:43.112 "qid": 0, 00:17:43.112 "state": "enabled", 00:17:43.112 "thread": "nvmf_tgt_poll_group_000", 00:17:43.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:43.112 "listen_address": { 00:17:43.112 "trtype": "TCP", 00:17:43.112 "adrfam": "IPv4", 00:17:43.112 "traddr": "10.0.0.2", 00:17:43.112 "trsvcid": "4420" 00:17:43.112 }, 00:17:43.112 "peer_address": { 00:17:43.112 "trtype": "TCP", 00:17:43.112 "adrfam": "IPv4", 00:17:43.112 "traddr": "10.0.0.1", 00:17:43.112 "trsvcid": "36486" 00:17:43.112 }, 00:17:43.112 "auth": { 00:17:43.112 "state": "completed", 00:17:43.112 "digest": "sha384", 00:17:43.112 "dhgroup": "null" 00:17:43.112 } 00:17:43.112 } 00:17:43.112 ]' 00:17:43.112 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:17:43.112 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.112 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.112 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:43.112 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.112 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.112 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.112 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.373 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:43.373 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:43.947 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.947 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.947 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.947 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.947 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.947 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.947 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:43.947 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:44.210 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:44.210 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.210 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:44.210 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:44.210 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:44.210 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.210 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:44.210 
18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.210 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.210 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.210 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:44.210 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.210 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.472 00:17:44.472 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.472 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.472 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.734 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.734 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.734 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.734 18:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.734 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.734 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.734 { 00:17:44.734 "cntlid": 55, 00:17:44.734 "qid": 0, 00:17:44.734 "state": "enabled", 00:17:44.734 "thread": "nvmf_tgt_poll_group_000", 00:17:44.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.734 "listen_address": { 00:17:44.734 "trtype": "TCP", 00:17:44.734 "adrfam": "IPv4", 00:17:44.734 "traddr": "10.0.0.2", 00:17:44.734 "trsvcid": "4420" 00:17:44.734 }, 00:17:44.734 "peer_address": { 00:17:44.734 "trtype": "TCP", 00:17:44.734 "adrfam": "IPv4", 00:17:44.734 "traddr": "10.0.0.1", 00:17:44.734 "trsvcid": "35430" 00:17:44.734 }, 00:17:44.734 "auth": { 00:17:44.734 "state": "completed", 00:17:44.734 "digest": "sha384", 00:17:44.734 "dhgroup": "null" 00:17:44.734 } 00:17:44.734 } 00:17:44.734 ]' 00:17:44.734 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.734 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.734 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.734 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:44.734 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.734 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.734 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.734 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.995 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:44.995 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:45.565 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.565 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.565 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.565 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.565 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.565 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.565 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.565 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:45.565 18:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:45.826 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:45.826 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.826 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.826 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:45.826 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:45.826 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.827 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.827 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.827 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.827 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.827 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.827 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.827 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.087 00:17:46.087 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.087 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.087 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.408 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.408 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.408 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.408 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.408 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.408 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.408 { 00:17:46.408 "cntlid": 57, 00:17:46.408 "qid": 0, 00:17:46.408 "state": "enabled", 00:17:46.408 "thread": "nvmf_tgt_poll_group_000", 00:17:46.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:46.408 "listen_address": { 00:17:46.408 "trtype": "TCP", 00:17:46.408 "adrfam": "IPv4", 00:17:46.408 "traddr": "10.0.0.2", 00:17:46.408 
"trsvcid": "4420" 00:17:46.408 }, 00:17:46.408 "peer_address": { 00:17:46.408 "trtype": "TCP", 00:17:46.408 "adrfam": "IPv4", 00:17:46.408 "traddr": "10.0.0.1", 00:17:46.408 "trsvcid": "35454" 00:17:46.408 }, 00:17:46.408 "auth": { 00:17:46.408 "state": "completed", 00:17:46.408 "digest": "sha384", 00:17:46.408 "dhgroup": "ffdhe2048" 00:17:46.408 } 00:17:46.408 } 00:17:46.408 ]' 00:17:46.408 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.408 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.408 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.408 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.408 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.408 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.408 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.408 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.744 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:46.744 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:47.366 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.366 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.366 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.366 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.366 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.366 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.366 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:47.366 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:47.646 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:47.646 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.646 18:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:47.646 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:47.647 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:47.647 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.647 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.647 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.647 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.647 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.647 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.647 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.647 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.647 00:17:47.647 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.647 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.647 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.906 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.906 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.906 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.906 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.906 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.906 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.906 { 00:17:47.906 "cntlid": 59, 00:17:47.906 "qid": 0, 00:17:47.906 "state": "enabled", 00:17:47.906 "thread": "nvmf_tgt_poll_group_000", 00:17:47.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.906 "listen_address": { 00:17:47.906 "trtype": "TCP", 00:17:47.906 "adrfam": "IPv4", 00:17:47.906 "traddr": "10.0.0.2", 00:17:47.906 "trsvcid": "4420" 00:17:47.906 }, 00:17:47.906 "peer_address": { 00:17:47.906 "trtype": "TCP", 00:17:47.906 "adrfam": "IPv4", 00:17:47.906 "traddr": "10.0.0.1", 00:17:47.906 "trsvcid": "35470" 00:17:47.906 }, 00:17:47.906 "auth": { 00:17:47.906 "state": "completed", 00:17:47.906 "digest": "sha384", 00:17:47.906 "dhgroup": "ffdhe2048" 00:17:47.906 } 00:17:47.906 } 00:17:47.906 ]' 00:17:47.906 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.906 18:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.906 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.906 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:47.906 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.166 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.166 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.166 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.166 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:48.166 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:49.105 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.106 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.367 00:17:49.367 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.367 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.367 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.628 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.628 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.628 18:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.628 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.628 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.628 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.628 { 00:17:49.628 "cntlid": 61, 00:17:49.628 "qid": 0, 00:17:49.628 "state": "enabled", 00:17:49.628 "thread": "nvmf_tgt_poll_group_000", 00:17:49.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:49.628 "listen_address": { 00:17:49.628 "trtype": "TCP", 00:17:49.628 "adrfam": "IPv4", 00:17:49.628 "traddr": "10.0.0.2", 00:17:49.628 "trsvcid": "4420" 00:17:49.628 }, 00:17:49.628 "peer_address": { 00:17:49.628 "trtype": "TCP", 00:17:49.628 "adrfam": "IPv4", 00:17:49.628 "traddr": "10.0.0.1", 00:17:49.628 "trsvcid": "35504" 00:17:49.628 }, 00:17:49.628 "auth": { 00:17:49.628 "state": "completed", 00:17:49.628 "digest": "sha384", 00:17:49.628 "dhgroup": "ffdhe2048" 00:17:49.628 } 00:17:49.628 } 00:17:49.628 ]' 00:17:49.628 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.628 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.628 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.628 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:49.628 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.628 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.628 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.629 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.889 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:49.889 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:50.459 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.459 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.459 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.459 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.459 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.459 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.459 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:50.459 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:50.720 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:50.720 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.720 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:50.720 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:50.720 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:50.720 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.720 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:50.720 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.720 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.720 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.720 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:50.720 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.720 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.981 00:17:50.981 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.981 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.981 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.245 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.245 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.245 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.245 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.245 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.245 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.245 { 00:17:51.246 "cntlid": 63, 00:17:51.246 "qid": 0, 00:17:51.246 "state": "enabled", 00:17:51.246 "thread": "nvmf_tgt_poll_group_000", 00:17:51.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:51.246 "listen_address": { 00:17:51.246 "trtype": "TCP", 00:17:51.246 "adrfam": 
"IPv4", 00:17:51.246 "traddr": "10.0.0.2", 00:17:51.246 "trsvcid": "4420" 00:17:51.246 }, 00:17:51.246 "peer_address": { 00:17:51.246 "trtype": "TCP", 00:17:51.246 "adrfam": "IPv4", 00:17:51.246 "traddr": "10.0.0.1", 00:17:51.246 "trsvcid": "35532" 00:17:51.246 }, 00:17:51.246 "auth": { 00:17:51.246 "state": "completed", 00:17:51.246 "digest": "sha384", 00:17:51.246 "dhgroup": "ffdhe2048" 00:17:51.246 } 00:17:51.246 } 00:17:51.246 ]' 00:17:51.246 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.246 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.246 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.246 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:51.246 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.247 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.247 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.247 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.510 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:51.511 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:52.081 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.081 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.081 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.081 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.081 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.081 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.081 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.081 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:52.081 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:52.341 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:52.341 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.341 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:52.341 
18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:52.341 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:52.341 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.341 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.341 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.341 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.341 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.341 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.341 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.341 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.601 00:17:52.601 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.601 18:16:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.601 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.862 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.862 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.862 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.862 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.862 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.862 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.862 { 00:17:52.862 "cntlid": 65, 00:17:52.862 "qid": 0, 00:17:52.862 "state": "enabled", 00:17:52.862 "thread": "nvmf_tgt_poll_group_000", 00:17:52.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.862 "listen_address": { 00:17:52.862 "trtype": "TCP", 00:17:52.862 "adrfam": "IPv4", 00:17:52.862 "traddr": "10.0.0.2", 00:17:52.862 "trsvcid": "4420" 00:17:52.862 }, 00:17:52.862 "peer_address": { 00:17:52.862 "trtype": "TCP", 00:17:52.862 "adrfam": "IPv4", 00:17:52.863 "traddr": "10.0.0.1", 00:17:52.863 "trsvcid": "35556" 00:17:52.863 }, 00:17:52.863 "auth": { 00:17:52.863 "state": "completed", 00:17:52.863 "digest": "sha384", 00:17:52.863 "dhgroup": "ffdhe3072" 00:17:52.863 } 00:17:52.863 } 00:17:52.863 ]' 00:17:52.863 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.863 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:17:52.863 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.863 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.863 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.863 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.863 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.863 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.123 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:53.123 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:53.694 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.694 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.694 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.694 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.694 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.694 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.694 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:53.694 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:53.954 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:53.954 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.954 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:53.954 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:53.954 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:53.954 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.954 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:53.954 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.954 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.954 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.954 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.954 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.954 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.215 00:17:54.215 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.215 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.215 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.215 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.215 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.215 18:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.215 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.215 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.215 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.215 { 00:17:54.215 "cntlid": 67, 00:17:54.215 "qid": 0, 00:17:54.215 "state": "enabled", 00:17:54.215 "thread": "nvmf_tgt_poll_group_000", 00:17:54.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:54.215 "listen_address": { 00:17:54.215 "trtype": "TCP", 00:17:54.215 "adrfam": "IPv4", 00:17:54.215 "traddr": "10.0.0.2", 00:17:54.215 "trsvcid": "4420" 00:17:54.215 }, 00:17:54.215 "peer_address": { 00:17:54.215 "trtype": "TCP", 00:17:54.215 "adrfam": "IPv4", 00:17:54.215 "traddr": "10.0.0.1", 00:17:54.215 "trsvcid": "52578" 00:17:54.215 }, 00:17:54.215 "auth": { 00:17:54.215 "state": "completed", 00:17:54.215 "digest": "sha384", 00:17:54.215 "dhgroup": "ffdhe3072" 00:17:54.215 } 00:17:54.215 } 00:17:54.215 ]' 00:17:54.215 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.475 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.475 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.475 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.475 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.475 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.475 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.475 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.736 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:54.736 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:17:55.306 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.306 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.306 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.306 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.306 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.306 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.306 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:55.306 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:55.567 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:55.567 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.567 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:55.567 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:55.567 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:55.567 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.567 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.567 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.567 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.567 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.567 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.567 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.567 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.828 00:17:55.828 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.828 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.828 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.828 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.828 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.828 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.828 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.828 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.828 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.828 { 00:17:55.828 "cntlid": 69, 00:17:55.828 "qid": 0, 00:17:55.828 "state": "enabled", 00:17:55.828 "thread": "nvmf_tgt_poll_group_000", 00:17:55.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:55.828 
"listen_address": { 00:17:55.828 "trtype": "TCP", 00:17:55.828 "adrfam": "IPv4", 00:17:55.828 "traddr": "10.0.0.2", 00:17:55.828 "trsvcid": "4420" 00:17:55.828 }, 00:17:55.828 "peer_address": { 00:17:55.828 "trtype": "TCP", 00:17:55.828 "adrfam": "IPv4", 00:17:55.828 "traddr": "10.0.0.1", 00:17:55.828 "trsvcid": "52592" 00:17:55.828 }, 00:17:55.828 "auth": { 00:17:55.828 "state": "completed", 00:17:55.828 "digest": "sha384", 00:17:55.828 "dhgroup": "ffdhe3072" 00:17:55.828 } 00:17:55.828 } 00:17:55.828 ]' 00:17:55.828 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.088 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.088 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.088 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.088 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.088 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.088 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.088 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.349 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:56.349 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:17:56.918 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.918 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:56.918 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.918 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.918 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.918 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.918 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:56.918 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:57.180 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:57.180 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.180 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:57.180 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:57.180 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:57.180 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.180 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:57.180 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.180 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.180 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.180 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:57.180 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.180 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.441 00:17:57.441 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.441 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:57.441 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.441 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.702 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.702 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.702 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.702 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.702 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.702 { 00:17:57.702 "cntlid": 71, 00:17:57.702 "qid": 0, 00:17:57.702 "state": "enabled", 00:17:57.702 "thread": "nvmf_tgt_poll_group_000", 00:17:57.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:57.702 "listen_address": { 00:17:57.702 "trtype": "TCP", 00:17:57.702 "adrfam": "IPv4", 00:17:57.702 "traddr": "10.0.0.2", 00:17:57.702 "trsvcid": "4420" 00:17:57.702 }, 00:17:57.702 "peer_address": { 00:17:57.702 "trtype": "TCP", 00:17:57.702 "adrfam": "IPv4", 00:17:57.702 "traddr": "10.0.0.1", 00:17:57.702 "trsvcid": "52604" 00:17:57.702 }, 00:17:57.702 "auth": { 00:17:57.702 "state": "completed", 00:17:57.702 "digest": "sha384", 00:17:57.702 "dhgroup": "ffdhe3072" 00:17:57.702 } 00:17:57.702 } 00:17:57.702 ]' 00:17:57.702 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.702 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.702 18:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.702 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:57.702 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.702 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.702 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.702 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.962 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:57.962 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:17:58.532 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.532 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.532 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
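[Annotation: each iteration above ends with the same verification step — `target/auth.sh` fetches the active qpairs with `nvmf_subsystem_get_qpairs` and asserts that `.auth.digest`, `.auth.dhgroup`, and `.auth.state` match the negotiated parameters. The following is a minimal, self-contained sketch of that check against a captured qpairs JSON like the ones printed in this log. It is illustrative only: the sample JSON is trimmed to the `auth` object, and `sed` stands in for the `jq -r '.[0].auth.digest'` call used by the real script, since `jq` availability outside the test rig is an assumption.]

```shell
#!/bin/sh
# Trimmed stand-in for the qpairs JSON emitted by
# `rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0` in the log above.
qpairs='[ { "auth": { "state": "completed", "digest": "sha384", "dhgroup": "ffdhe3072" } } ]'

# Extract the negotiated digest; the real script does: jq -r '.[0].auth.digest'
digest=$(printf '%s' "$qpairs" | sed -n 's/.*"digest": "\([^"]*\)".*/\1/p')

# Same pass/fail shape as the script's `[[ sha384 == \s\h\a\3\8\4 ]]` checks.
[ "$digest" = "sha384" ] && echo "digest ok: $digest"
```

The same pattern repeats for `dhgroup` (compared against the group set via `bdev_nvme_set_options --dhchap-dhgroups`) and for `state`, which must read `completed` for the DH-HMAC-CHAP handshake to count as successful.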
00:17:58.532 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.532 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.532 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.532 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.532 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:58.532 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:58.793 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:58.793 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.793 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:58.793 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:58.793 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:58.793 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.793 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.793 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:58.793 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.793 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.793 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.793 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.793 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.054 00:17:59.054 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.054 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.054 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.314 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.314 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.314 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.314 18:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.314 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.314 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.314 { 00:17:59.314 "cntlid": 73, 00:17:59.314 "qid": 0, 00:17:59.314 "state": "enabled", 00:17:59.314 "thread": "nvmf_tgt_poll_group_000", 00:17:59.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.314 "listen_address": { 00:17:59.314 "trtype": "TCP", 00:17:59.314 "adrfam": "IPv4", 00:17:59.314 "traddr": "10.0.0.2", 00:17:59.314 "trsvcid": "4420" 00:17:59.314 }, 00:17:59.314 "peer_address": { 00:17:59.314 "trtype": "TCP", 00:17:59.314 "adrfam": "IPv4", 00:17:59.314 "traddr": "10.0.0.1", 00:17:59.314 "trsvcid": "52626" 00:17:59.314 }, 00:17:59.314 "auth": { 00:17:59.314 "state": "completed", 00:17:59.314 "digest": "sha384", 00:17:59.314 "dhgroup": "ffdhe4096" 00:17:59.314 } 00:17:59.314 } 00:17:59.314 ]' 00:17:59.314 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.314 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.314 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.314 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.314 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.314 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.314 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.314 18:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.574 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:17:59.574 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:18:00.145 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.145 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.145 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.145 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.145 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.145 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.145 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:00.145 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:00.406 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:00.406 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.406 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:00.406 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:00.406 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:00.406 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.406 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.406 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.406 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.406 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.406 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.406 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.406 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.667 00:18:00.667 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.667 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.667 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.928 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.928 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.928 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.928 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.928 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.928 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.928 { 00:18:00.928 "cntlid": 75, 00:18:00.928 "qid": 0, 00:18:00.928 "state": "enabled", 00:18:00.928 "thread": "nvmf_tgt_poll_group_000", 00:18:00.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:00.928 
"listen_address": { 00:18:00.928 "trtype": "TCP", 00:18:00.928 "adrfam": "IPv4", 00:18:00.928 "traddr": "10.0.0.2", 00:18:00.928 "trsvcid": "4420" 00:18:00.928 }, 00:18:00.928 "peer_address": { 00:18:00.928 "trtype": "TCP", 00:18:00.928 "adrfam": "IPv4", 00:18:00.928 "traddr": "10.0.0.1", 00:18:00.928 "trsvcid": "52658" 00:18:00.928 }, 00:18:00.928 "auth": { 00:18:00.928 "state": "completed", 00:18:00.928 "digest": "sha384", 00:18:00.928 "dhgroup": "ffdhe4096" 00:18:00.928 } 00:18:00.928 } 00:18:00.928 ]' 00:18:00.928 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.928 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.928 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.928 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:00.928 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.928 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.928 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.928 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.189 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:18:01.189 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:18:01.760 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.760 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.760 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.760 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.021 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.021 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.021 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:02.021 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:02.021 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:02.021 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.021 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:02.021 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:02.021 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:02.021 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.021 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.021 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.021 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.021 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.021 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.021 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.021 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.282 00:18:02.282 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:02.282 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.282 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.544 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.544 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.544 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.544 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.544 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.544 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.544 { 00:18:02.544 "cntlid": 77, 00:18:02.544 "qid": 0, 00:18:02.544 "state": "enabled", 00:18:02.544 "thread": "nvmf_tgt_poll_group_000", 00:18:02.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.544 "listen_address": { 00:18:02.544 "trtype": "TCP", 00:18:02.544 "adrfam": "IPv4", 00:18:02.544 "traddr": "10.0.0.2", 00:18:02.544 "trsvcid": "4420" 00:18:02.544 }, 00:18:02.544 "peer_address": { 00:18:02.544 "trtype": "TCP", 00:18:02.544 "adrfam": "IPv4", 00:18:02.544 "traddr": "10.0.0.1", 00:18:02.544 "trsvcid": "52690" 00:18:02.544 }, 00:18:02.544 "auth": { 00:18:02.544 "state": "completed", 00:18:02.544 "digest": "sha384", 00:18:02.544 "dhgroup": "ffdhe4096" 00:18:02.544 } 00:18:02.544 } 00:18:02.544 ]' 00:18:02.544 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.544 18:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.544 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.544 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.544 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.544 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.544 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.544 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.805 18:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:18:02.805 18:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:18:03.375 18:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.636 18:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.636 18:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.636 18:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.636 18:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.636 18:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.636 18:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:03.636 18:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:03.636 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:03.636 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.636 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:03.636 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:03.636 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:03.636 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.637 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:03.637 18:17:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.637 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.637 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.637 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:03.637 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.637 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.898 00:18:03.898 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.898 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.898 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.159 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.159 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.159 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.159 18:17:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.159 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.159 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.159 { 00:18:04.159 "cntlid": 79, 00:18:04.159 "qid": 0, 00:18:04.159 "state": "enabled", 00:18:04.159 "thread": "nvmf_tgt_poll_group_000", 00:18:04.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.159 "listen_address": { 00:18:04.159 "trtype": "TCP", 00:18:04.159 "adrfam": "IPv4", 00:18:04.159 "traddr": "10.0.0.2", 00:18:04.159 "trsvcid": "4420" 00:18:04.159 }, 00:18:04.159 "peer_address": { 00:18:04.159 "trtype": "TCP", 00:18:04.159 "adrfam": "IPv4", 00:18:04.159 "traddr": "10.0.0.1", 00:18:04.159 "trsvcid": "38658" 00:18:04.159 }, 00:18:04.159 "auth": { 00:18:04.159 "state": "completed", 00:18:04.159 "digest": "sha384", 00:18:04.159 "dhgroup": "ffdhe4096" 00:18:04.159 } 00:18:04.159 } 00:18:04.159 ]' 00:18:04.159 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.159 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.159 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.159 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:04.159 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.421 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.421 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.421 18:17:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.421 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:18:04.421 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:18:04.992 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.992 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.992 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.992 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.992 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.992 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.992 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.992 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:18:04.992 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:05.253 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:05.253 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.253 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:05.253 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:05.253 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:05.253 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.253 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.253 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.253 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.253 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.253 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.253 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.253 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.513 00:18:05.774 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.774 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.774 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.774 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.774 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.774 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.774 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.774 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.774 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.774 { 00:18:05.774 "cntlid": 81, 00:18:05.774 "qid": 0, 00:18:05.774 "state": "enabled", 00:18:05.774 "thread": "nvmf_tgt_poll_group_000", 00:18:05.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:05.774 "listen_address": { 
00:18:05.774 "trtype": "TCP", 00:18:05.774 "adrfam": "IPv4", 00:18:05.774 "traddr": "10.0.0.2", 00:18:05.774 "trsvcid": "4420" 00:18:05.774 }, 00:18:05.774 "peer_address": { 00:18:05.774 "trtype": "TCP", 00:18:05.774 "adrfam": "IPv4", 00:18:05.774 "traddr": "10.0.0.1", 00:18:05.774 "trsvcid": "38674" 00:18:05.774 }, 00:18:05.774 "auth": { 00:18:05.774 "state": "completed", 00:18:05.774 "digest": "sha384", 00:18:05.774 "dhgroup": "ffdhe6144" 00:18:05.774 } 00:18:05.774 } 00:18:05.774 ]' 00:18:05.774 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.774 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.774 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.035 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.035 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.035 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.035 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.035 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.035 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:18:06.035 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.977 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.238 00:18:07.238 18:17:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.238 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.238 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.498 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.498 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.498 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.498 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.498 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.498 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.498 { 00:18:07.498 "cntlid": 83, 00:18:07.498 "qid": 0, 00:18:07.498 "state": "enabled", 00:18:07.498 "thread": "nvmf_tgt_poll_group_000", 00:18:07.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:07.498 "listen_address": { 00:18:07.498 "trtype": "TCP", 00:18:07.498 "adrfam": "IPv4", 00:18:07.498 "traddr": "10.0.0.2", 00:18:07.498 "trsvcid": "4420" 00:18:07.498 }, 00:18:07.498 "peer_address": { 00:18:07.498 "trtype": "TCP", 00:18:07.498 "adrfam": "IPv4", 00:18:07.498 "traddr": "10.0.0.1", 00:18:07.498 "trsvcid": "38680" 00:18:07.498 }, 00:18:07.498 "auth": { 00:18:07.498 "state": "completed", 00:18:07.498 "digest": "sha384", 00:18:07.498 "dhgroup": "ffdhe6144" 00:18:07.498 } 00:18:07.498 } 00:18:07.498 ]' 00:18:07.498 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:18:07.498 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.498 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.759 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.759 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.759 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.759 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.759 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.759 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:18:07.759 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:18:08.699 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.699 18:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.699 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.699 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.699 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.699 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.700 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:08.700 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:08.700 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:08.700 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.700 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:08.700 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:08.700 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:08.700 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.700 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.700 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.700 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.700 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.700 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.700 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.700 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.960 00:18:09.220 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.220 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.220 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.220 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.220 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.220 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.220 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.220 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.220 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.220 { 00:18:09.220 "cntlid": 85, 00:18:09.220 "qid": 0, 00:18:09.220 "state": "enabled", 00:18:09.220 "thread": "nvmf_tgt_poll_group_000", 00:18:09.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.220 "listen_address": { 00:18:09.220 "trtype": "TCP", 00:18:09.220 "adrfam": "IPv4", 00:18:09.220 "traddr": "10.0.0.2", 00:18:09.220 "trsvcid": "4420" 00:18:09.220 }, 00:18:09.220 "peer_address": { 00:18:09.220 "trtype": "TCP", 00:18:09.220 "adrfam": "IPv4", 00:18:09.220 "traddr": "10.0.0.1", 00:18:09.220 "trsvcid": "38700" 00:18:09.220 }, 00:18:09.220 "auth": { 00:18:09.220 "state": "completed", 00:18:09.220 "digest": "sha384", 00:18:09.220 "dhgroup": "ffdhe6144" 00:18:09.220 } 00:18:09.220 } 00:18:09.220 ]' 00:18:09.220 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.220 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.220 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.481 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:09.481 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.481 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:09.481 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.481 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.481 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:18:09.742 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:18:10.314 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.314 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.314 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.314 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.314 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.314 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:10.314 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:10.314 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:10.574 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:10.574 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.574 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:10.574 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:10.574 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:10.574 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.574 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:10.574 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.574 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.574 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.574 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:10.574 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.574 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.833 00:18:10.833 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.833 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.833 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.095 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.095 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.095 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.095 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.095 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.095 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.095 { 00:18:11.095 "cntlid": 87, 00:18:11.095 "qid": 0, 00:18:11.095 "state": "enabled", 00:18:11.095 "thread": "nvmf_tgt_poll_group_000", 00:18:11.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.095 "listen_address": { 00:18:11.095 "trtype": 
"TCP", 00:18:11.095 "adrfam": "IPv4", 00:18:11.095 "traddr": "10.0.0.2", 00:18:11.095 "trsvcid": "4420" 00:18:11.095 }, 00:18:11.095 "peer_address": { 00:18:11.095 "trtype": "TCP", 00:18:11.095 "adrfam": "IPv4", 00:18:11.095 "traddr": "10.0.0.1", 00:18:11.095 "trsvcid": "38734" 00:18:11.095 }, 00:18:11.095 "auth": { 00:18:11.095 "state": "completed", 00:18:11.095 "digest": "sha384", 00:18:11.095 "dhgroup": "ffdhe6144" 00:18:11.095 } 00:18:11.095 } 00:18:11.095 ]' 00:18:11.095 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.095 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.095 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.095 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:11.095 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.095 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.095 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.095 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.355 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:18:11.355 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:18:11.927 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.927 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.927 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.927 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.927 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.927 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.927 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.927 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:11.927 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:12.188 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:12.188 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.188 18:17:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:12.188 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:12.188 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:12.188 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.188 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.188 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.188 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.188 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.188 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.188 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.188 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.760 00:18:12.760 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.760 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.760 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.760 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.760 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.760 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.760 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.760 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.760 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.760 { 00:18:12.760 "cntlid": 89, 00:18:12.760 "qid": 0, 00:18:12.760 "state": "enabled", 00:18:12.760 "thread": "nvmf_tgt_poll_group_000", 00:18:12.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.760 "listen_address": { 00:18:12.760 "trtype": "TCP", 00:18:12.760 "adrfam": "IPv4", 00:18:12.760 "traddr": "10.0.0.2", 00:18:12.760 "trsvcid": "4420" 00:18:12.760 }, 00:18:12.760 "peer_address": { 00:18:12.760 "trtype": "TCP", 00:18:12.760 "adrfam": "IPv4", 00:18:12.760 "traddr": "10.0.0.1", 00:18:12.760 "trsvcid": "38766" 00:18:12.760 }, 00:18:12.760 "auth": { 00:18:12.760 "state": "completed", 00:18:12.760 "digest": "sha384", 00:18:12.760 "dhgroup": "ffdhe8192" 00:18:12.760 } 00:18:12.760 } 00:18:12.760 ]' 00:18:12.760 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.760 18:17:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.760 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.021 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.021 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.021 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.021 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.021 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.021 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:18:13.021 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:18:13.960 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:18:13.960 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.960 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.960 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.960 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.960 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.960 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:13.960 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:13.960 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:13.960 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.960 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:13.960 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:13.960 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:13.961 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.961 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.961 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.961 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.961 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.961 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.961 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.961 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.531 00:18:14.531 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.531 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.531 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.531 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.531 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.531 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.531 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.531 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.531 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.531 { 00:18:14.531 "cntlid": 91, 00:18:14.531 "qid": 0, 00:18:14.531 "state": "enabled", 00:18:14.531 "thread": "nvmf_tgt_poll_group_000", 00:18:14.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.531 "listen_address": { 00:18:14.531 "trtype": "TCP", 00:18:14.531 "adrfam": "IPv4", 00:18:14.531 "traddr": "10.0.0.2", 00:18:14.531 "trsvcid": "4420" 00:18:14.531 }, 00:18:14.531 "peer_address": { 00:18:14.531 "trtype": "TCP", 00:18:14.531 "adrfam": "IPv4", 00:18:14.531 "traddr": "10.0.0.1", 00:18:14.531 "trsvcid": "47924" 00:18:14.531 }, 00:18:14.531 "auth": { 00:18:14.531 "state": "completed", 00:18:14.531 "digest": "sha384", 00:18:14.531 "dhgroup": "ffdhe8192" 00:18:14.531 } 00:18:14.531 } 00:18:14.531 ]' 00:18:14.531 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.791 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.791 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.791 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.791 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.791 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:14.791 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.791 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.051 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:18:15.051 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:18:15.620 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.620 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.620 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.620 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.620 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.620 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:15.620 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:15.620 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:15.880 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:18:15.880 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:15.880 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:15.880 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:15.880 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:15.880 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:15.880 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:15.880 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:15.880 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:15.880 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:15.880 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:15.880 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:15.880 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:16.140
00:18:16.402 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:16.402 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:16.402 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:16.402 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:16.402 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:16.402 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:16.402 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:16.402 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:16.402 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:16.402 {
00:18:16.402 "cntlid": 93,
00:18:16.402 "qid": 0,
00:18:16.402 "state": "enabled",
00:18:16.402 "thread": "nvmf_tgt_poll_group_000",
00:18:16.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:16.402 "listen_address": {
00:18:16.402 "trtype": "TCP",
00:18:16.402 "adrfam": "IPv4",
00:18:16.402 "traddr": "10.0.0.2",
00:18:16.402 "trsvcid": "4420"
00:18:16.402 },
00:18:16.402 "peer_address": {
00:18:16.402 "trtype": "TCP",
00:18:16.402 "adrfam": "IPv4",
00:18:16.402 "traddr": "10.0.0.1",
00:18:16.402 "trsvcid": "47946"
00:18:16.402 },
00:18:16.402 "auth": {
00:18:16.402 "state": "completed",
00:18:16.402 "digest": "sha384",
00:18:16.402 "dhgroup": "ffdhe8192"
00:18:16.402 }
00:18:16.402 }
00:18:16.402 ]'
00:18:16.402 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:16.402 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:16.402 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:16.663 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:16.663 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:16.663 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:16.663 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:16.663 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:16.923 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N:
00:18:16.923 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N:
00:18:17.493 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:17.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:17.493 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:17.493 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:17.493 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:17.493 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:17.493 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:17.493 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:17.493 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:17.753 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:18:17.753 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:17.753 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:17.753 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:17.753 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:17.753 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:17.753 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:17.753 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:17.753 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:17.753 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:17.753 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:17.753 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:17.753 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:18.014
00:18:18.281 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:18.281 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:18.281 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:18.282 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:18.282 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:18.282 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.282 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:18.282 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:18.282 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:18.282 {
00:18:18.282 "cntlid": 95,
00:18:18.282 "qid": 0,
00:18:18.282 "state": "enabled",
00:18:18.282 "thread": "nvmf_tgt_poll_group_000",
00:18:18.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:18.282 "listen_address": {
00:18:18.282 "trtype": "TCP",
00:18:18.282 "adrfam": "IPv4",
00:18:18.282 "traddr": "10.0.0.2",
00:18:18.282 "trsvcid": "4420"
00:18:18.282 },
00:18:18.282 "peer_address": {
00:18:18.282 "trtype": "TCP",
00:18:18.282 "adrfam": "IPv4",
00:18:18.282 "traddr": "10.0.0.1",
00:18:18.282 "trsvcid": "47976"
00:18:18.282 },
00:18:18.282 "auth": {
00:18:18.282 "state": "completed",
00:18:18.282 "digest": "sha384",
00:18:18.282 "dhgroup": "ffdhe8192"
00:18:18.282 }
00:18:18.282 }
00:18:18.282 ]'
00:18:18.282 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:18.282 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:18.282 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:18.543 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:18.543 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:18.543 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:18.543 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:18.543 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:18.543 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=:
00:18:18.543 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=:
00:18:19.484 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:19.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:19.485 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:19.745
00:18:19.745 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:19.745 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:19.745 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:20.006 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:20.006 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:20.006 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.006 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:20.006 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.006 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:20.006 {
00:18:20.006 "cntlid": 97,
00:18:20.006 "qid": 0,
00:18:20.006 "state": "enabled",
00:18:20.006 "thread": "nvmf_tgt_poll_group_000",
00:18:20.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:20.006 "listen_address": {
00:18:20.006 "trtype": "TCP",
00:18:20.006 "adrfam": "IPv4",
00:18:20.006 "traddr": "10.0.0.2",
00:18:20.006 "trsvcid": "4420"
00:18:20.006 },
00:18:20.006 "peer_address": {
00:18:20.006 "trtype": "TCP",
00:18:20.006 "adrfam": "IPv4",
00:18:20.006 "traddr": "10.0.0.1",
00:18:20.006 "trsvcid": "47984"
00:18:20.006 },
00:18:20.006 "auth": {
00:18:20.006 "state": "completed",
00:18:20.006 "digest": "sha512",
00:18:20.006 "dhgroup": "null"
00:18:20.006 }
00:18:20.006 }
00:18:20.006 ]'
00:18:20.006 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:20.006 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:20.006 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:20.006 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:20.006 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:20.006 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:20.006 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:20.006 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:20.266 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=:
00:18:20.267 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=:
00:18:20.836 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:20.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:20.836 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:20.836 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.836 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:20.836 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.836 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:20.836 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:18:20.836 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:18:21.097 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:18:21.097 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:21.097 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:21.097 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:21.097 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:21.097 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:21.097 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:21.097 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:21.097 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:21.097 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:21.097 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:21.097 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:21.098 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:21.358
00:18:21.358 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:21.358 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:21.358 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:21.618 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:21.618 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:21.618 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:21.618 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:21.618 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:21.618 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:21.618 {
00:18:21.618 "cntlid": 99,
00:18:21.618 "qid": 0,
00:18:21.618 "state": "enabled",
00:18:21.618 "thread": "nvmf_tgt_poll_group_000",
00:18:21.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:21.618 "listen_address": {
00:18:21.618 "trtype": "TCP",
00:18:21.618 "adrfam": "IPv4",
00:18:21.618 "traddr": "10.0.0.2",
00:18:21.618 "trsvcid": "4420"
00:18:21.618 },
00:18:21.618 "peer_address": {
00:18:21.618 "trtype": "TCP",
00:18:21.618 "adrfam": "IPv4",
00:18:21.618 "traddr": "10.0.0.1",
00:18:21.618 "trsvcid": "48016"
00:18:21.618 },
00:18:21.618 "auth": {
00:18:21.618 "state": "completed",
00:18:21.618 "digest": "sha512",
00:18:21.618 "dhgroup": "null"
00:18:21.618 }
00:18:21.618 }
00:18:21.618 ]'
00:18:21.618 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:21.618 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:21.618 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:21.618 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:21.618 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:21.618 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:21.618 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:21.618 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:21.879 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==:
00:18:21.879 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==:
00:18:22.449 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:22.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:22.449 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:22.449 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:22.449 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.449 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:22.449 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:22.449 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:18:22.449 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:18:22.709 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:18:22.709 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:22.709 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:22.709 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:22.709 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:22.709 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:22.709 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:22.710 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:22.710 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.710 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:22.710 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:22.710 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:22.710 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:22.970
00:18:22.970 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:22.971 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:22.971 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:23.231 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:23.231 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:23.231 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:23.231 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:23.231 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:23.231 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:23.231 {
00:18:23.231 "cntlid": 101,
00:18:23.231 "qid": 0,
00:18:23.231 "state": "enabled",
00:18:23.231 "thread": "nvmf_tgt_poll_group_000",
00:18:23.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:23.231 "listen_address": {
00:18:23.231 "trtype": "TCP",
00:18:23.231 "adrfam": "IPv4",
00:18:23.231 "traddr": "10.0.0.2",
00:18:23.231 "trsvcid": "4420"
00:18:23.231 },
00:18:23.231 "peer_address": {
00:18:23.231 "trtype": "TCP",
00:18:23.231 "adrfam": "IPv4",
00:18:23.231 "traddr": "10.0.0.1",
00:18:23.231 "trsvcid": "48038"
00:18:23.231 },
00:18:23.231 "auth": {
00:18:23.231 "state": "completed",
00:18:23.231 "digest": "sha512",
00:18:23.231 "dhgroup": "null"
00:18:23.231 }
00:18:23.231 }
00:18:23.231 ]' 00:18:23.231 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.231 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.231 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.231 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:23.231 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.231 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.231 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.231 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.492 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:18:23.492 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:18:24.062 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.062 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.062 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.062 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.062 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.062 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.062 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.062 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:24.062 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:24.323 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:24.323 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.323 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.323 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:24.323 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:24.323 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.323 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:24.323 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.323 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.323 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.323 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:24.323 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.323 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.584 00:18:24.584 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.584 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.584 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.846 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.846 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:24.846 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.846 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.846 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.846 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.846 { 00:18:24.846 "cntlid": 103, 00:18:24.846 "qid": 0, 00:18:24.846 "state": "enabled", 00:18:24.846 "thread": "nvmf_tgt_poll_group_000", 00:18:24.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:24.846 "listen_address": { 00:18:24.846 "trtype": "TCP", 00:18:24.846 "adrfam": "IPv4", 00:18:24.846 "traddr": "10.0.0.2", 00:18:24.846 "trsvcid": "4420" 00:18:24.846 }, 00:18:24.846 "peer_address": { 00:18:24.846 "trtype": "TCP", 00:18:24.846 "adrfam": "IPv4", 00:18:24.846 "traddr": "10.0.0.1", 00:18:24.846 "trsvcid": "36804" 00:18:24.846 }, 00:18:24.846 "auth": { 00:18:24.846 "state": "completed", 00:18:24.846 "digest": "sha512", 00:18:24.846 "dhgroup": "null" 00:18:24.846 } 00:18:24.846 } 00:18:24.846 ]' 00:18:24.846 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.846 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.846 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.846 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:24.846 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.846 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.846 18:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.846 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.107 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:18:25.107 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:18:25.678 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.678 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.678 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.678 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.678 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.678 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.678 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.678 18:17:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:25.678 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:25.939 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:25.939 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.939 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:25.939 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:25.939 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:25.939 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.939 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.939 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.939 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.939 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.939 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.939 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.939 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.199 00:18:26.199 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.199 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.199 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.199 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.199 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.199 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.199 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.199 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.199 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.199 { 00:18:26.199 "cntlid": 105, 00:18:26.199 "qid": 0, 00:18:26.199 "state": "enabled", 00:18:26.199 "thread": "nvmf_tgt_poll_group_000", 00:18:26.199 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.199 "listen_address": { 00:18:26.199 "trtype": "TCP", 00:18:26.199 "adrfam": "IPv4", 00:18:26.199 "traddr": "10.0.0.2", 00:18:26.199 "trsvcid": "4420" 00:18:26.199 }, 00:18:26.199 "peer_address": { 00:18:26.199 "trtype": "TCP", 00:18:26.199 "adrfam": "IPv4", 00:18:26.199 "traddr": "10.0.0.1", 00:18:26.199 "trsvcid": "36814" 00:18:26.199 }, 00:18:26.199 "auth": { 00:18:26.199 "state": "completed", 00:18:26.199 "digest": "sha512", 00:18:26.199 "dhgroup": "ffdhe2048" 00:18:26.199 } 00:18:26.199 } 00:18:26.199 ]' 00:18:26.199 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.460 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.460 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.460 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:26.460 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.460 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.460 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.460 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.720 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret 
DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:18:26.720 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:18:27.291 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.291 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.291 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.291 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.291 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.291 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.291 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:27.291 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:27.551 18:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:27.551 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.551 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:27.551 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:27.551 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:27.551 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.551 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.551 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.551 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.551 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.551 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.551 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.551 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.811 00:18:27.811 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.811 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.811 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.811 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.811 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.811 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.811 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.811 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.811 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.811 { 00:18:27.811 "cntlid": 107, 00:18:27.811 "qid": 0, 00:18:27.811 "state": "enabled", 00:18:27.811 "thread": "nvmf_tgt_poll_group_000", 00:18:27.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:27.811 "listen_address": { 00:18:27.811 "trtype": "TCP", 00:18:27.811 "adrfam": "IPv4", 00:18:27.811 "traddr": "10.0.0.2", 00:18:27.811 "trsvcid": "4420" 00:18:27.811 }, 00:18:27.811 "peer_address": { 00:18:27.811 "trtype": "TCP", 00:18:27.811 "adrfam": "IPv4", 00:18:27.811 "traddr": "10.0.0.1", 00:18:27.811 "trsvcid": "36854" 00:18:27.811 }, 00:18:27.811 "auth": { 00:18:27.811 "state": 
"completed", 00:18:27.811 "digest": "sha512", 00:18:27.811 "dhgroup": "ffdhe2048" 00:18:27.811 } 00:18:27.811 } 00:18:27.811 ]' 00:18:27.811 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.071 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.071 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.071 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:28.071 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.071 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.071 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.071 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.331 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:18:28.331 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:18:28.902 18:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.902 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.902 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.902 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.902 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.902 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.902 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:28.902 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:29.162 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:29.162 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.162 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:29.162 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:29.162 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:29.162 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.162 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.162 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.162 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.162 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.162 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.162 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.162 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.162 00:18:29.423 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.423 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.423 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.423 
18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.423 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.423 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.423 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.423 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.423 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.423 { 00:18:29.423 "cntlid": 109, 00:18:29.423 "qid": 0, 00:18:29.423 "state": "enabled", 00:18:29.423 "thread": "nvmf_tgt_poll_group_000", 00:18:29.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:29.423 "listen_address": { 00:18:29.423 "trtype": "TCP", 00:18:29.423 "adrfam": "IPv4", 00:18:29.423 "traddr": "10.0.0.2", 00:18:29.423 "trsvcid": "4420" 00:18:29.423 }, 00:18:29.423 "peer_address": { 00:18:29.423 "trtype": "TCP", 00:18:29.423 "adrfam": "IPv4", 00:18:29.423 "traddr": "10.0.0.1", 00:18:29.423 "trsvcid": "36892" 00:18:29.423 }, 00:18:29.423 "auth": { 00:18:29.423 "state": "completed", 00:18:29.423 "digest": "sha512", 00:18:29.423 "dhgroup": "ffdhe2048" 00:18:29.423 } 00:18:29.423 } 00:18:29.423 ]' 00:18:29.423 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.683 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.683 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.683 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:29.683 18:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.683 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.683 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.683 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.943 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:18:29.944 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:18:30.514 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.514 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.514 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.514 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.514 
18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.514 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.514 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:30.514 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:30.774 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:30.774 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.774 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:30.774 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:30.774 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:30.774 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.774 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:30.774 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.774 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.774 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.774 18:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:30.774 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:30.774 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.033 00:18:31.033 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.033 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.033 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.033 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.033 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.033 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.033 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.033 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.033 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.033 { 00:18:31.033 "cntlid": 111, 
00:18:31.033 "qid": 0, 00:18:31.033 "state": "enabled", 00:18:31.033 "thread": "nvmf_tgt_poll_group_000", 00:18:31.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:31.033 "listen_address": { 00:18:31.033 "trtype": "TCP", 00:18:31.033 "adrfam": "IPv4", 00:18:31.033 "traddr": "10.0.0.2", 00:18:31.033 "trsvcid": "4420" 00:18:31.033 }, 00:18:31.033 "peer_address": { 00:18:31.033 "trtype": "TCP", 00:18:31.033 "adrfam": "IPv4", 00:18:31.033 "traddr": "10.0.0.1", 00:18:31.033 "trsvcid": "36918" 00:18:31.033 }, 00:18:31.033 "auth": { 00:18:31.033 "state": "completed", 00:18:31.033 "digest": "sha512", 00:18:31.033 "dhgroup": "ffdhe2048" 00:18:31.033 } 00:18:31.033 } 00:18:31.033 ]' 00:18:31.033 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.293 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.293 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.293 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:31.293 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.293 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.293 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.293 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.553 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:18:31.553 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:18:32.125 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.125 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.125 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.125 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.125 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.125 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.125 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.125 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:32.125 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:32.386 18:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:32.386 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.386 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:32.386 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:32.386 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:32.386 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.386 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.386 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.386 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.386 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.386 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.386 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.386 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.647 00:18:32.647 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.647 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.647 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.647 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.647 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.647 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.647 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.647 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.647 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.647 { 00:18:32.647 "cntlid": 113, 00:18:32.647 "qid": 0, 00:18:32.647 "state": "enabled", 00:18:32.647 "thread": "nvmf_tgt_poll_group_000", 00:18:32.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:32.647 "listen_address": { 00:18:32.647 "trtype": "TCP", 00:18:32.647 "adrfam": "IPv4", 00:18:32.647 "traddr": "10.0.0.2", 00:18:32.647 "trsvcid": "4420" 00:18:32.647 }, 00:18:32.647 "peer_address": { 00:18:32.647 "trtype": "TCP", 00:18:32.647 "adrfam": "IPv4", 00:18:32.647 "traddr": "10.0.0.1", 00:18:32.647 "trsvcid": "36946" 00:18:32.647 }, 00:18:32.647 "auth": { 00:18:32.647 "state": 
"completed", 00:18:32.647 "digest": "sha512", 00:18:32.647 "dhgroup": "ffdhe3072" 00:18:32.647 } 00:18:32.647 } 00:18:32.647 ]' 00:18:32.647 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.909 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.909 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.909 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:32.909 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.909 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.909 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.909 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.169 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:18:33.169 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret 
DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:18:33.740 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.740 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.740 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.740 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.740 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.740 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.740 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:33.740 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:34.001 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:34.001 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.001 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.001 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:34.001 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:34.001 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.001 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.001 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.001 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.001 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.001 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.001 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.001 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.262 00:18:34.262 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.262 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.262 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.262 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.262 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.262 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.262 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.262 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.262 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.262 { 00:18:34.262 "cntlid": 115, 00:18:34.262 "qid": 0, 00:18:34.262 "state": "enabled", 00:18:34.262 "thread": "nvmf_tgt_poll_group_000", 00:18:34.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:34.262 "listen_address": { 00:18:34.262 "trtype": "TCP", 00:18:34.262 "adrfam": "IPv4", 00:18:34.262 "traddr": "10.0.0.2", 00:18:34.262 "trsvcid": "4420" 00:18:34.262 }, 00:18:34.262 "peer_address": { 00:18:34.262 "trtype": "TCP", 00:18:34.262 "adrfam": "IPv4", 00:18:34.262 "traddr": "10.0.0.1", 00:18:34.262 "trsvcid": "59940" 00:18:34.262 }, 00:18:34.262 "auth": { 00:18:34.262 "state": "completed", 00:18:34.262 "digest": "sha512", 00:18:34.262 "dhgroup": "ffdhe3072" 00:18:34.262 } 00:18:34.262 } 00:18:34.262 ]' 00:18:34.262 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.522 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.522 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.522 18:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:34.522 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.522 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.522 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.522 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.782 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:18:34.783 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:18:35.354 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.354 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.354 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:35.354 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.354 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.354 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.354 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:35.354 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:35.615 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:18:35.615 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.615 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:35.615 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:35.615 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:35.615 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.615 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.615 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.615 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:35.615 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.615 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.615 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.615 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.615 00:18:35.877 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.877 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.877 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.877 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.877 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.877 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.877 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.877 18:17:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.877 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.877 { 00:18:35.877 "cntlid": 117, 00:18:35.877 "qid": 0, 00:18:35.877 "state": "enabled", 00:18:35.877 "thread": "nvmf_tgt_poll_group_000", 00:18:35.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:35.877 "listen_address": { 00:18:35.877 "trtype": "TCP", 00:18:35.877 "adrfam": "IPv4", 00:18:35.877 "traddr": "10.0.0.2", 00:18:35.877 "trsvcid": "4420" 00:18:35.877 }, 00:18:35.877 "peer_address": { 00:18:35.877 "trtype": "TCP", 00:18:35.877 "adrfam": "IPv4", 00:18:35.877 "traddr": "10.0.0.1", 00:18:35.877 "trsvcid": "59962" 00:18:35.877 }, 00:18:35.877 "auth": { 00:18:35.877 "state": "completed", 00:18:35.877 "digest": "sha512", 00:18:35.877 "dhgroup": "ffdhe3072" 00:18:35.877 } 00:18:35.877 } 00:18:35.877 ]' 00:18:35.877 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.877 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.138 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.138 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:36.138 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.138 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.138 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.138 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.399 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:18:36.399 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:18:36.971 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.971 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.971 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.971 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.971 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.971 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.971 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:36.971 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:37.232 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:37.232 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.232 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:37.232 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:37.232 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:37.232 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.232 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:37.232 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.232 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.232 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.232 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:37.232 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:37.232 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:37.232 00:18:37.494 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.494 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.494 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.494 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.494 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.494 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.494 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.494 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.494 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.494 { 00:18:37.494 "cntlid": 119, 00:18:37.494 "qid": 0, 00:18:37.494 "state": "enabled", 00:18:37.494 "thread": "nvmf_tgt_poll_group_000", 00:18:37.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:37.494 "listen_address": { 00:18:37.494 "trtype": "TCP", 00:18:37.494 "adrfam": "IPv4", 00:18:37.494 "traddr": "10.0.0.2", 00:18:37.494 "trsvcid": "4420" 00:18:37.494 }, 00:18:37.494 "peer_address": { 00:18:37.494 "trtype": "TCP", 00:18:37.494 "adrfam": "IPv4", 00:18:37.494 "traddr": "10.0.0.1", 
00:18:37.494 "trsvcid": "59988" 00:18:37.494 }, 00:18:37.494 "auth": { 00:18:37.494 "state": "completed", 00:18:37.494 "digest": "sha512", 00:18:37.494 "dhgroup": "ffdhe3072" 00:18:37.494 } 00:18:37.494 } 00:18:37.494 ]' 00:18:37.494 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.755 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.755 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.755 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:37.755 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.755 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.755 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.755 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.016 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:18:38.016 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:18:38.587 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.587 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.587 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.587 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.587 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.587 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.587 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.588 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:38.588 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:38.849 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:38.849 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.849 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:38.849 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:38.849 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:38.849 18:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.849 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.849 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.849 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.849 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.849 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.849 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.849 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.111 00:18:39.111 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.111 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.111 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.111 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.111 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.111 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.111 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.111 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.111 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.111 { 00:18:39.111 "cntlid": 121, 00:18:39.111 "qid": 0, 00:18:39.111 "state": "enabled", 00:18:39.111 "thread": "nvmf_tgt_poll_group_000", 00:18:39.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:39.111 "listen_address": { 00:18:39.111 "trtype": "TCP", 00:18:39.111 "adrfam": "IPv4", 00:18:39.111 "traddr": "10.0.0.2", 00:18:39.111 "trsvcid": "4420" 00:18:39.111 }, 00:18:39.111 "peer_address": { 00:18:39.111 "trtype": "TCP", 00:18:39.111 "adrfam": "IPv4", 00:18:39.111 "traddr": "10.0.0.1", 00:18:39.111 "trsvcid": "60004" 00:18:39.111 }, 00:18:39.111 "auth": { 00:18:39.111 "state": "completed", 00:18:39.111 "digest": "sha512", 00:18:39.111 "dhgroup": "ffdhe4096" 00:18:39.111 } 00:18:39.111 } 00:18:39.111 ]' 00:18:39.111 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.372 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.372 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.372 18:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:39.372 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.372 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.372 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.372 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.632 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:18:39.632 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:18:40.205 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.205 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.205 18:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.205 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.205 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.205 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.205 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:40.205 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:40.467 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:40.467 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.467 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:40.467 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:40.467 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:40.467 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.467 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.467 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.467 18:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.467 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.467 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.467 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.467 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.728 00:18:40.728 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.728 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.728 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.990 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.990 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.990 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.990 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.990 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.990 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.990 { 00:18:40.990 "cntlid": 123, 00:18:40.990 "qid": 0, 00:18:40.990 "state": "enabled", 00:18:40.990 "thread": "nvmf_tgt_poll_group_000", 00:18:40.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:40.990 "listen_address": { 00:18:40.990 "trtype": "TCP", 00:18:40.990 "adrfam": "IPv4", 00:18:40.990 "traddr": "10.0.0.2", 00:18:40.990 "trsvcid": "4420" 00:18:40.990 }, 00:18:40.990 "peer_address": { 00:18:40.990 "trtype": "TCP", 00:18:40.990 "adrfam": "IPv4", 00:18:40.990 "traddr": "10.0.0.1", 00:18:40.990 "trsvcid": "60036" 00:18:40.990 }, 00:18:40.990 "auth": { 00:18:40.990 "state": "completed", 00:18:40.990 "digest": "sha512", 00:18:40.990 "dhgroup": "ffdhe4096" 00:18:40.990 } 00:18:40.990 } 00:18:40.990 ]' 00:18:40.990 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.990 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.990 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.990 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:40.990 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.990 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.990 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.990 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.251 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:18:41.251 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:18:41.822 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.822 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.822 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.822 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.822 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.822 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.822 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:41.822 18:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:42.083 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:42.083 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.083 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:42.083 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:42.083 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:42.083 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.083 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.083 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.083 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.083 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.083 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.083 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.083 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.345 00:18:42.345 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.345 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.345 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.606 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.606 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.606 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.606 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.606 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.606 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.606 { 00:18:42.606 "cntlid": 125, 00:18:42.606 "qid": 0, 00:18:42.606 "state": "enabled", 00:18:42.606 "thread": "nvmf_tgt_poll_group_000", 00:18:42.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:42.606 "listen_address": { 00:18:42.606 "trtype": "TCP", 00:18:42.606 "adrfam": "IPv4", 00:18:42.606 "traddr": "10.0.0.2", 00:18:42.606 
"trsvcid": "4420" 00:18:42.606 }, 00:18:42.606 "peer_address": { 00:18:42.606 "trtype": "TCP", 00:18:42.606 "adrfam": "IPv4", 00:18:42.606 "traddr": "10.0.0.1", 00:18:42.606 "trsvcid": "60056" 00:18:42.606 }, 00:18:42.606 "auth": { 00:18:42.606 "state": "completed", 00:18:42.606 "digest": "sha512", 00:18:42.606 "dhgroup": "ffdhe4096" 00:18:42.606 } 00:18:42.606 } 00:18:42.606 ]' 00:18:42.606 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.606 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.606 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.606 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:42.606 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.606 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.606 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.606 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.867 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:18:42.867 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:18:43.439 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.439 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.439 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.439 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.439 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.439 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.439 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:43.439 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:43.731 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:43.731 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.731 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:43.731 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:43.731 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:43.731 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.731 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:43.731 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.731 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.731 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.731 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:43.731 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:43.731 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.031 00:18:44.031 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.031 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.031 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.031 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.031 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.031 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.031 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.337 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.337 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.337 { 00:18:44.337 "cntlid": 127, 00:18:44.337 "qid": 0, 00:18:44.337 "state": "enabled", 00:18:44.337 "thread": "nvmf_tgt_poll_group_000", 00:18:44.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:44.337 "listen_address": { 00:18:44.337 "trtype": "TCP", 00:18:44.337 "adrfam": "IPv4", 00:18:44.337 "traddr": "10.0.0.2", 00:18:44.337 "trsvcid": "4420" 00:18:44.337 }, 00:18:44.337 "peer_address": { 00:18:44.337 "trtype": "TCP", 00:18:44.337 "adrfam": "IPv4", 00:18:44.337 "traddr": "10.0.0.1", 00:18:44.337 "trsvcid": "56202" 00:18:44.337 }, 00:18:44.337 "auth": { 00:18:44.337 "state": "completed", 00:18:44.337 "digest": "sha512", 00:18:44.337 "dhgroup": "ffdhe4096" 00:18:44.337 } 00:18:44.337 } 00:18:44.337 ]' 00:18:44.337 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.337 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.337 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.337 18:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:44.337 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.337 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.337 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.337 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.632 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:18:44.632 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:45.232 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:45.805
00:18:45.805 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:45.805 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:45.805 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:45.805 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:45.805 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:45.805 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:45.805 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:45.805 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:45.805 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:45.805 {
00:18:45.805 "cntlid": 129,
00:18:45.805 "qid": 0,
00:18:45.805 "state": "enabled",
00:18:45.805 "thread": "nvmf_tgt_poll_group_000",
00:18:45.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:45.805 "listen_address": {
00:18:45.805 "trtype": "TCP",
00:18:45.805 "adrfam": "IPv4",
00:18:45.805 "traddr": "10.0.0.2",
00:18:45.805 "trsvcid": "4420"
00:18:45.805 },
00:18:45.805 "peer_address": {
00:18:45.805 "trtype": "TCP",
00:18:45.805 "adrfam": "IPv4",
00:18:45.805 "traddr": "10.0.0.1",
00:18:45.805 "trsvcid": "56238"
00:18:45.805 },
00:18:45.805 "auth": {
00:18:45.805 "state": "completed",
00:18:45.805 "digest": "sha512",
00:18:45.805 "dhgroup": "ffdhe6144"
00:18:45.805 }
00:18:45.805 }
00:18:45.805 ]'
00:18:45.805 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:45.805 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:45.805 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:46.066 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:46.066 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:46.066 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:46.066 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:46.066 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:46.327 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=:
00:18:46.327 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=:
00:18:46.898 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:46.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:46.898 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:46.898 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:46.898 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:46.898 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:46.898 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:46.898 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:46.898 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:47.159 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1
00:18:47.159 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:47.159 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:47.159 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:47.159 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:47.159 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:47.159 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:47.159 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.159 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.159 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.159 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:47.159 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:47.159 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:47.420
00:18:47.420 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:47.420 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:47.420 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:47.681 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:47.681 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:47.681 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.681 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:47.681 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.681 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:47.681 {
00:18:47.681 "cntlid": 131,
00:18:47.681 "qid": 0,
00:18:47.681 "state": "enabled",
00:18:47.681 "thread": "nvmf_tgt_poll_group_000",
00:18:47.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:47.681 "listen_address": {
00:18:47.681 "trtype": "TCP",
00:18:47.681 "adrfam": "IPv4",
00:18:47.681 "traddr": "10.0.0.2",
00:18:47.681 "trsvcid": "4420"
00:18:47.681 },
00:18:47.681 "peer_address": {
00:18:47.681 "trtype": "TCP",
00:18:47.681 "adrfam": "IPv4",
00:18:47.681 "traddr": "10.0.0.1",
00:18:47.681 "trsvcid": "56266"
00:18:47.681 },
00:18:47.681 "auth": {
00:18:47.681 "state": "completed",
00:18:47.681 "digest": "sha512",
00:18:47.681 "dhgroup": "ffdhe6144"
00:18:47.681 }
00:18:47.681 }
00:18:47.681 ]'
00:18:47.681 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:47.681 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:47.681 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:47.681 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:47.681 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:47.681 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:47.681 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:47.681 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:47.941 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==:
00:18:47.941 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==:
00:18:48.512 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:48.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:48.512 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:48.512 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:48.512 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:48.513 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:48.513 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:48.513 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:48.513 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:48.774 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:18:48.774 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:48.774 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:48.774 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:48.774 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:48.774 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:48.774 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:48.774 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:48.774 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:48.774 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:48.774 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:48.774 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:48.774 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:49.035
00:18:49.035 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:49.035 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:49.035 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:49.296 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:49.296 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:49.296 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:49.296 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:49.296 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:49.296 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:49.296 {
00:18:49.296 "cntlid": 133,
00:18:49.296 "qid": 0,
00:18:49.296 "state": "enabled",
00:18:49.296 "thread": "nvmf_tgt_poll_group_000",
00:18:49.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:49.296 "listen_address": {
00:18:49.296 "trtype": "TCP",
00:18:49.296 "adrfam": "IPv4",
00:18:49.296 "traddr": "10.0.0.2",
00:18:49.296 "trsvcid": "4420"
00:18:49.296 },
00:18:49.296 "peer_address": {
00:18:49.296 "trtype": "TCP",
00:18:49.296 "adrfam": "IPv4",
00:18:49.296 "traddr": "10.0.0.1",
00:18:49.296 "trsvcid": "56288"
00:18:49.296 },
00:18:49.296 "auth": {
00:18:49.296 "state": "completed",
00:18:49.296 "digest": "sha512",
00:18:49.296 "dhgroup": "ffdhe6144"
00:18:49.296 }
00:18:49.296 }
00:18:49.296 ]'
00:18:49.296 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:49.296 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:49.296 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:49.296 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:49.296 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:49.557 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:49.557 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:49.557 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:49.557 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N:
00:18:49.557 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N:
00:18:50.127 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:50.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:50.387 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:50.647
00:18:50.908 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:50.908 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:50.908 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:50.908 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:50.908 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:50.908 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:50.908 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:50.908 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:50.908 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:50.908 {
00:18:50.908 "cntlid": 135,
00:18:50.908 "qid": 0,
00:18:50.908 "state": "enabled",
00:18:50.908 "thread": "nvmf_tgt_poll_group_000",
00:18:50.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:50.908 "listen_address": {
00:18:50.908 "trtype": "TCP",
00:18:50.908 "adrfam": "IPv4",
00:18:50.908 "traddr": "10.0.0.2",
00:18:50.908 "trsvcid": "4420"
00:18:50.908 },
00:18:50.908 "peer_address": {
00:18:50.908 "trtype": "TCP",
00:18:50.908 "adrfam": "IPv4",
00:18:50.908 "traddr": "10.0.0.1",
00:18:50.908 "trsvcid": "56306"
00:18:50.908 },
00:18:50.908 "auth": {
00:18:50.908 "state": "completed",
00:18:50.908 "digest": "sha512",
00:18:50.908 "dhgroup": "ffdhe6144"
00:18:50.908 }
00:18:50.908 }
00:18:50.908 ]'
00:18:50.908 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:50.908 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:51.169 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:51.169 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:51.169 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:51.169 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:51.169 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:51.169 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:51.169 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=:
00:18:51.169 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=:
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:52.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:52.112 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:52.683
00:18:52.683 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:52.684 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:52.684 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:52.945 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:52.945 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:52.945 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:52.945 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:52.945 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.945 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:52.945 {
00:18:52.945 "cntlid": 137,
00:18:52.945 "qid": 0,
00:18:52.945 "state": "enabled",
00:18:52.945 "thread": "nvmf_tgt_poll_group_000",
00:18:52.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:52.945 "listen_address": {
00:18:52.945 "trtype": "TCP",
00:18:52.945 "adrfam": "IPv4",
00:18:52.945 "traddr": "10.0.0.2",
00:18:52.945 "trsvcid": "4420"
00:18:52.945 },
00:18:52.945 "peer_address": {
00:18:52.945 "trtype": "TCP",
00:18:52.945 "adrfam": "IPv4",
00:18:52.945 "traddr": "10.0.0.1",
00:18:52.945 "trsvcid": "56344"
00:18:52.945 },
00:18:52.945 "auth": {
00:18:52.945 "state": "completed",
00:18:52.945 "digest": "sha512",
00:18:52.945 "dhgroup": "ffdhe8192"
00:18:52.945 }
00:18:52.945 }
00:18:52.945 ]'
00:18:52.945 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:52.945 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:52.945 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:52.945 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:52.945 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:52.945 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:52.945 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:52.945 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:53.206 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=:
00:18:53.206 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=:
00:18:53.776 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:53.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:53.776 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:53.776 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:53.776 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:53.776 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:53.776 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:53.776 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:18:53.776 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:18:54.036 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:18:54.036 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:54.036 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:54.036 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:54.036 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:54.036 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:54.036 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:54.036 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:54.036 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:54.036 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:54.036 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:54.036 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:54.036 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:54.607
00:18:54.607 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:54.607 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:54.607 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:54.607 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:54.607 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:54.607 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:54.607 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:54.607 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:54.607 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:54.607 {
00:18:54.607 "cntlid": 139,
00:18:54.607 "qid": 0,
00:18:54.607 "state": "enabled",
00:18:54.607 "thread": "nvmf_tgt_poll_group_000",
00:18:54.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:54.607 "listen_address": {
00:18:54.607 "trtype": "TCP",
00:18:54.607 "adrfam": "IPv4",
00:18:54.607 "traddr": "10.0.0.2",
00:18:54.607 "trsvcid": "4420"
00:18:54.607 },
00:18:54.607 "peer_address": {
00:18:54.607 "trtype": "TCP",
00:18:54.607 "adrfam": "IPv4",
00:18:54.607 "traddr": "10.0.0.1",
00:18:54.607 "trsvcid": "57238"
00:18:54.607 },
00:18:54.607 "auth": {
00:18:54.607 "state": "completed",
00:18:54.607 "digest": "sha512",
00:18:54.607 "dhgroup": "ffdhe8192"
00:18:54.607 }
00:18:54.607 }
00:18:54.607 ]'
00:18:54.608 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:54.608 18:17:56
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.608 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.868 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:54.868 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.868 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.868 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.868 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.868 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:18:54.868 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: --dhchap-ctrl-secret DHHC-1:02:NzgwZmVjMjJlZWQwYjg1NjU3MDY0ZDc3NjBkYjNhMmFmOWEzZGJjMWQ4NjhkMDQxQ0TRfQ==: 00:18:55.811 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.811 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.811 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.811 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.811 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.811 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.811 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:55.811 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:55.811 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:55.811 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.811 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:55.811 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:55.811 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:55.811 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.811 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:55.811 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.811 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.811 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.811 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.811 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.811 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.382 00:18:56.382 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.382 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.382 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.643 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.643 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.643 18:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.643 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.643 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.643 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.643 { 00:18:56.643 "cntlid": 141, 00:18:56.643 "qid": 0, 00:18:56.643 "state": "enabled", 00:18:56.643 "thread": "nvmf_tgt_poll_group_000", 00:18:56.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:56.643 "listen_address": { 00:18:56.643 "trtype": "TCP", 00:18:56.643 "adrfam": "IPv4", 00:18:56.643 "traddr": "10.0.0.2", 00:18:56.643 "trsvcid": "4420" 00:18:56.643 }, 00:18:56.643 "peer_address": { 00:18:56.643 "trtype": "TCP", 00:18:56.643 "adrfam": "IPv4", 00:18:56.643 "traddr": "10.0.0.1", 00:18:56.643 "trsvcid": "57272" 00:18:56.643 }, 00:18:56.643 "auth": { 00:18:56.643 "state": "completed", 00:18:56.643 "digest": "sha512", 00:18:56.643 "dhgroup": "ffdhe8192" 00:18:56.643 } 00:18:56.643 } 00:18:56.643 ]' 00:18:56.643 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.643 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.644 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.644 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:56.644 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.644 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.644 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.644 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.904 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:18:56.904 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:01:MWNjMjIwMDg2ZmNmMjJmZDllYzRiYzMxZTFmMzQzYjZA4e7N: 00:18:57.473 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.473 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.474 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.474 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.474 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.474 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.474 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:57.474 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:57.734 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:57.734 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.734 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:57.734 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:57.734 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:57.734 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.734 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:57.734 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.734 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.734 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.734 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:57.734 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:57.734 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:58.304 00:18:58.304 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.304 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.304 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.304 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.304 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.304 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.304 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.304 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.304 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.304 { 00:18:58.304 "cntlid": 143, 00:18:58.304 "qid": 0, 00:18:58.304 "state": "enabled", 00:18:58.304 "thread": "nvmf_tgt_poll_group_000", 00:18:58.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:58.304 "listen_address": { 00:18:58.304 "trtype": "TCP", 00:18:58.304 "adrfam": 
"IPv4", 00:18:58.304 "traddr": "10.0.0.2", 00:18:58.304 "trsvcid": "4420" 00:18:58.304 }, 00:18:58.305 "peer_address": { 00:18:58.305 "trtype": "TCP", 00:18:58.305 "adrfam": "IPv4", 00:18:58.305 "traddr": "10.0.0.1", 00:18:58.305 "trsvcid": "57288" 00:18:58.305 }, 00:18:58.305 "auth": { 00:18:58.305 "state": "completed", 00:18:58.305 "digest": "sha512", 00:18:58.305 "dhgroup": "ffdhe8192" 00:18:58.305 } 00:18:58.305 } 00:18:58.305 ]' 00:18:58.305 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.305 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.305 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.567 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:58.567 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.567 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.567 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.567 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.828 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:18:58.828 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:18:59.400 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.400 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.400 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.400 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.400 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.400 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:59.400 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:59.400 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:59.400 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:59.400 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:59.400 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:59.662 18:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:59.662 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.662 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:59.662 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:59.662 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:59.662 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.662 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.662 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.662 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.662 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.662 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.662 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.662 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.923 00:18:59.923 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.923 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.923 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.182 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.182 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.182 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.182 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.182 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.182 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.182 { 00:19:00.182 "cntlid": 145, 00:19:00.182 "qid": 0, 00:19:00.182 "state": "enabled", 00:19:00.182 "thread": "nvmf_tgt_poll_group_000", 00:19:00.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:00.182 "listen_address": { 00:19:00.182 "trtype": "TCP", 00:19:00.182 "adrfam": "IPv4", 00:19:00.182 "traddr": "10.0.0.2", 00:19:00.182 "trsvcid": "4420" 00:19:00.182 }, 00:19:00.182 "peer_address": { 00:19:00.182 "trtype": "TCP", 00:19:00.182 "adrfam": "IPv4", 00:19:00.182 "traddr": "10.0.0.1", 00:19:00.182 "trsvcid": "57314" 00:19:00.182 }, 00:19:00.182 "auth": { 00:19:00.182 "state": 
"completed", 00:19:00.182 "digest": "sha512", 00:19:00.182 "dhgroup": "ffdhe8192" 00:19:00.182 } 00:19:00.182 } 00:19:00.182 ]' 00:19:00.183 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.183 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.183 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.183 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:00.183 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.442 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.442 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.442 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.442 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:19:00.442 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZjZlNDMzY2IyOWZjODc3ODIyYTlkMjAyY2Q5MzNjYjI1MWE1MmFmODU2N2UyNWQ2LnID9g==: --dhchap-ctrl-secret 
DHHC-1:03:OWY3NGQ2MmUyNGE5NDMzYTgyMWNiNWY3YzI5NjRmNTI1YTMxZjA0OGM5YmQwOGUzYjc3ZDI2NmI2OGFhYWZmY5kc9d0=: 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:01.384 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:01.645 request: 00:19:01.645 { 00:19:01.645 "name": "nvme0", 00:19:01.645 "trtype": "tcp", 00:19:01.645 "traddr": "10.0.0.2", 00:19:01.645 "adrfam": "ipv4", 00:19:01.645 "trsvcid": "4420", 00:19:01.645 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:01.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:01.645 "prchk_reftag": false, 00:19:01.645 "prchk_guard": false, 00:19:01.645 "hdgst": false, 00:19:01.645 "ddgst": false, 00:19:01.645 "dhchap_key": "key2", 00:19:01.645 "allow_unrecognized_csi": false, 00:19:01.645 "method": "bdev_nvme_attach_controller", 00:19:01.645 "req_id": 1 00:19:01.645 } 00:19:01.645 Got JSON-RPC error response 00:19:01.645 response: 00:19:01.645 { 00:19:01.645 "code": -5, 00:19:01.645 "message": 
"Input/output error" 00:19:01.645 } 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:01.645 18:18:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:01.645 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:02.214 request: 00:19:02.214 { 00:19:02.214 "name": "nvme0", 00:19:02.214 "trtype": "tcp", 00:19:02.214 "traddr": "10.0.0.2", 00:19:02.214 "adrfam": "ipv4", 00:19:02.214 "trsvcid": "4420", 00:19:02.214 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:02.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:02.214 "prchk_reftag": false, 00:19:02.214 "prchk_guard": false, 00:19:02.214 "hdgst": 
false, 00:19:02.214 "ddgst": false, 00:19:02.214 "dhchap_key": "key1", 00:19:02.214 "dhchap_ctrlr_key": "ckey2", 00:19:02.214 "allow_unrecognized_csi": false, 00:19:02.214 "method": "bdev_nvme_attach_controller", 00:19:02.214 "req_id": 1 00:19:02.214 } 00:19:02.214 Got JSON-RPC error response 00:19:02.214 response: 00:19:02.214 { 00:19:02.214 "code": -5, 00:19:02.214 "message": "Input/output error" 00:19:02.214 } 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.214 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.474 request: 00:19:02.474 { 00:19:02.474 "name": "nvme0", 00:19:02.474 "trtype": 
"tcp", 00:19:02.474 "traddr": "10.0.0.2", 00:19:02.474 "adrfam": "ipv4", 00:19:02.474 "trsvcid": "4420", 00:19:02.474 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:02.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:02.474 "prchk_reftag": false, 00:19:02.474 "prchk_guard": false, 00:19:02.474 "hdgst": false, 00:19:02.474 "ddgst": false, 00:19:02.474 "dhchap_key": "key1", 00:19:02.474 "dhchap_ctrlr_key": "ckey1", 00:19:02.474 "allow_unrecognized_csi": false, 00:19:02.474 "method": "bdev_nvme_attach_controller", 00:19:02.474 "req_id": 1 00:19:02.474 } 00:19:02.474 Got JSON-RPC error response 00:19:02.474 response: 00:19:02.474 { 00:19:02.474 "code": -5, 00:19:02.474 "message": "Input/output error" 00:19:02.474 } 00:19:02.734 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:02.734 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:02.734 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:02.734 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:02.734 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.734 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.734 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.734 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.734 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1959734 00:19:02.734 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 1959734 ']' 00:19:02.734 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1959734 00:19:02.734 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:02.734 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.734 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1959734 00:19:02.734 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:02.734 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:02.735 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1959734' 00:19:02.735 killing process with pid 1959734 00:19:02.735 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1959734 00:19:02.735 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1959734 00:19:02.735 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:02.735 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:02.735 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:02.735 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.735 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1985598 00:19:02.735 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1985598 00:19:02.735 18:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:02.735 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1985598 ']' 00:19:02.735 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.735 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.735 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.735 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.735 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.995 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.995 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:02.995 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:02.995 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:02.995 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.995 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.995 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:02.995 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 1985598 00:19:02.995 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1985598 ']' 00:19:02.995 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.995 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.995 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.995 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.995 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.256 null0 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KtE 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.nVa ]] 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nVa 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.1lY 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.256 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.hnu ]] 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hnu 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.czZ 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.vka ]] 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vka 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.KMh 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:03.517 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.088 nvme0n1 00:19:04.088 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.088 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.088 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.349 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.349 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.349 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.349 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.349 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.349 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.349 { 00:19:04.349 "cntlid": 1, 00:19:04.349 "qid": 0, 00:19:04.349 "state": "enabled", 00:19:04.349 "thread": "nvmf_tgt_poll_group_000", 00:19:04.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:04.349 "listen_address": { 00:19:04.349 "trtype": "TCP", 00:19:04.349 "adrfam": "IPv4", 00:19:04.349 "traddr": "10.0.0.2", 00:19:04.349 "trsvcid": "4420" 00:19:04.349 }, 00:19:04.349 "peer_address": { 00:19:04.349 "trtype": "TCP", 00:19:04.349 "adrfam": "IPv4", 00:19:04.349 "traddr": 
"10.0.0.1", 00:19:04.349 "trsvcid": "47278" 00:19:04.349 }, 00:19:04.349 "auth": { 00:19:04.349 "state": "completed", 00:19:04.349 "digest": "sha512", 00:19:04.349 "dhgroup": "ffdhe8192" 00:19:04.349 } 00:19:04.349 } 00:19:04.349 ]' 00:19:04.349 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.349 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.349 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.610 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:04.610 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.610 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.610 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.610 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.610 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:19:04.610 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:19:05.551 18:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.551 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:05.551 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.551 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.551 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.551 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:05.552 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.552 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.552 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.552 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:05.552 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:05.552 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:05.552 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:05.552 18:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:05.552 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:05.552 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.552 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:05.552 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.552 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:05.552 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:05.552 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:05.812 request: 00:19:05.812 { 00:19:05.812 "name": "nvme0", 00:19:05.812 "trtype": "tcp", 00:19:05.812 "traddr": "10.0.0.2", 00:19:05.812 "adrfam": "ipv4", 00:19:05.812 "trsvcid": "4420", 00:19:05.812 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:05.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:05.813 "prchk_reftag": false, 00:19:05.813 "prchk_guard": false, 00:19:05.813 "hdgst": false, 00:19:05.813 "ddgst": false, 00:19:05.813 "dhchap_key": "key3", 00:19:05.813 
"allow_unrecognized_csi": false, 00:19:05.813 "method": "bdev_nvme_attach_controller", 00:19:05.813 "req_id": 1 00:19:05.813 } 00:19:05.813 Got JSON-RPC error response 00:19:05.813 response: 00:19:05.813 { 00:19:05.813 "code": -5, 00:19:05.813 "message": "Input/output error" 00:19:05.813 } 00:19:05.813 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:05.813 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:05.813 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:05.813 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:05.813 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:05.813 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:05.813 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:05.813 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:05.813 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:05.813 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:05.813 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:05.813 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:05.813 18:18:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.813 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:05.813 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.813 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:05.813 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:05.813 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:06.073 request: 00:19:06.073 { 00:19:06.073 "name": "nvme0", 00:19:06.073 "trtype": "tcp", 00:19:06.073 "traddr": "10.0.0.2", 00:19:06.073 "adrfam": "ipv4", 00:19:06.073 "trsvcid": "4420", 00:19:06.073 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:06.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:06.073 "prchk_reftag": false, 00:19:06.073 "prchk_guard": false, 00:19:06.073 "hdgst": false, 00:19:06.073 "ddgst": false, 00:19:06.073 "dhchap_key": "key3", 00:19:06.073 "allow_unrecognized_csi": false, 00:19:06.073 "method": "bdev_nvme_attach_controller", 00:19:06.073 "req_id": 1 00:19:06.073 } 00:19:06.073 Got JSON-RPC error response 00:19:06.073 response: 00:19:06.073 { 00:19:06.073 "code": -5, 00:19:06.073 "message": "Input/output error" 00:19:06.073 } 00:19:06.073 
18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:06.073 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:06.073 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:06.073 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:06.073 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:06.073 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:06.073 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:06.073 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:06.073 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:06.074 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:06.334 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:06.595 request: 00:19:06.595 { 00:19:06.595 "name": "nvme0", 00:19:06.595 "trtype": "tcp", 00:19:06.595 "traddr": "10.0.0.2", 00:19:06.595 "adrfam": "ipv4", 00:19:06.595 "trsvcid": "4420", 00:19:06.595 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:06.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:06.595 "prchk_reftag": false, 00:19:06.595 "prchk_guard": false, 00:19:06.595 "hdgst": false, 00:19:06.595 "ddgst": false, 00:19:06.595 "dhchap_key": "key0", 00:19:06.595 "dhchap_ctrlr_key": "key1", 00:19:06.595 "allow_unrecognized_csi": false, 00:19:06.595 "method": "bdev_nvme_attach_controller", 00:19:06.595 "req_id": 1 00:19:06.595 } 00:19:06.595 Got JSON-RPC error response 00:19:06.595 response: 00:19:06.595 { 00:19:06.595 "code": -5, 00:19:06.595 "message": "Input/output error" 00:19:06.595 } 00:19:06.595 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:06.595 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:06.595 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:06.595 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:06.595 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:06.595 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:06.595 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:06.856 nvme0n1 00:19:06.856 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:06.856 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:06.856 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.117 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.117 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.117 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.377 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:07.377 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.377 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:07.377 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.377 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:07.377 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:07.377 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:07.946 nvme0n1 00:19:07.946 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:07.946 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:07.946 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.206 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.206 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:08.206 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.206 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.206 
18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.206 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:08.206 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:08.206 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.467 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.467 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:19:08.467 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: --dhchap-ctrl-secret DHHC-1:03:NjMyNDg4NjdhNzIzMWU0NDVjMWU4YWM5YWUxMWQ5YzI4MTk4NTE4ODg1MDE2MTRlMjVkMDY5NDVkNTg5YmIyMpXudFA=: 00:19:09.040 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:09.040 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:09.040 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:09.040 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:09.040 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:09.040 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:09.040 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:09.040 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.040 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.301 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:19:09.301 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:09.301 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:09.301 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:09.301 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.301 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:09.301 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.301 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:09.301 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:09.301 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:09.562 request: 00:19:09.562 { 00:19:09.562 "name": "nvme0", 00:19:09.562 "trtype": "tcp", 00:19:09.562 "traddr": "10.0.0.2", 00:19:09.562 "adrfam": "ipv4", 00:19:09.562 "trsvcid": "4420", 00:19:09.562 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:09.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:09.562 "prchk_reftag": false, 00:19:09.562 "prchk_guard": false, 00:19:09.562 "hdgst": false, 00:19:09.562 "ddgst": false, 00:19:09.562 "dhchap_key": "key1", 00:19:09.562 "allow_unrecognized_csi": false, 00:19:09.562 "method": "bdev_nvme_attach_controller", 00:19:09.562 "req_id": 1 00:19:09.562 } 00:19:09.562 Got JSON-RPC error response 00:19:09.562 response: 00:19:09.562 { 00:19:09.562 "code": -5, 00:19:09.562 "message": "Input/output error" 00:19:09.562 } 00:19:09.822 18:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:09.822 18:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:09.822 18:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:09.822 18:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:09.822 18:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:09.822 18:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:09.822 18:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:10.393 nvme0n1 00:19:10.393 18:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:10.393 18:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:10.393 18:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.653 18:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.653 18:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.653 18:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.913 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.913 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.913 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:10.913 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.913 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:10.913 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:10.913 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:10.913 nvme0n1 00:19:11.173 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:11.173 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:11.173 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.173 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.173 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.173 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.433 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:11.433 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.433 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.433 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.433 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: '' 2s 00:19:11.433 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:11.433 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:11.433 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: 00:19:11.433 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:11.433 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:11.433 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:11.433 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: ]] 00:19:11.433 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZDVhMTViMThmOGRhZDk5ZWUzOThjYmY1MzgwMTgwYzJ1AL1b: 00:19:11.433 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:11.433 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:11.433 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:13.343 
18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: 2s 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:13.343 18:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: ]] 00:19:13.343 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OWU3MzFhMzQwOWU0MmRkNWI4YjU0NDllMmJhMWJlYWQyNjhlYWNhMzgxMjIzYWIz89SK9A==: 00:19:13.602 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:13.602 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:15.513 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:15.513 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:15.513 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:15.513 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:15.513 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:15.513 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:15.513 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:15.513 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.513 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:15.513 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.513 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.513 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.513 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:15.513 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:15.513 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:16.453 nvme0n1 00:19:16.453 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:19:16.453 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.453 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.453 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.453 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:16.453 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:16.713 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:16.713 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:16.713 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.973 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.973 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.973 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.973 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.973 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.973 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:16.973 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:17.233 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:17.233 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:17.233 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.233 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.233 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:17.233 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.233 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.233 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.233 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:17.233 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:17.233 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:17.233 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:17.233 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:17.233 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:17.233 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:17.233 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:17.233 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:17.802 request: 00:19:17.802 { 00:19:17.802 "name": "nvme0", 00:19:17.802 "dhchap_key": "key1", 00:19:17.802 "dhchap_ctrlr_key": "key3", 00:19:17.802 "method": "bdev_nvme_set_keys", 00:19:17.802 "req_id": 1 00:19:17.802 } 00:19:17.802 Got JSON-RPC error response 00:19:17.802 response: 00:19:17.802 { 00:19:17.802 "code": -13, 00:19:17.802 "message": "Permission denied" 00:19:17.802 } 00:19:17.802 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:17.802 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:17.802 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:17.802 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:17.802 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:17.802 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:17.802 18:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.802 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:19:17.802 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:19.186 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:19.186 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:19.186 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.186 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:19.186 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:19.186 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.186 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.186 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.186 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:19.186 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:19.186 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:19.757 nvme0n1 00:19:19.757 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:19.757 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.757 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.757 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.757 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:19.757 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:19.757 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:19.757 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:19.757 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.757 18:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:19.757 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.757 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:19.757 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:20.328 request: 00:19:20.328 { 00:19:20.328 "name": "nvme0", 00:19:20.328 "dhchap_key": "key2", 00:19:20.328 "dhchap_ctrlr_key": "key0", 00:19:20.328 "method": "bdev_nvme_set_keys", 00:19:20.328 "req_id": 1 00:19:20.328 } 00:19:20.328 Got JSON-RPC error response 00:19:20.328 response: 00:19:20.328 { 00:19:20.328 "code": -13, 00:19:20.328 "message": "Permission denied" 00:19:20.328 } 00:19:20.328 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:20.328 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:20.328 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:20.328 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:20.328 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:20.328 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:20.328 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.589 18:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:20.589 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:21.534 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:21.534 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:21.534 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.795 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:21.795 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:21.795 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:21.795 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1959924 00:19:21.795 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1959924 ']' 00:19:21.795 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1959924 00:19:21.795 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:21.795 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.795 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1959924 00:19:21.795 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:21.795 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:21.795 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 1959924' 00:19:21.795 killing process with pid 1959924 00:19:21.795 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1959924 00:19:21.795 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1959924 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:22.056 rmmod nvme_tcp 00:19:22.056 rmmod nvme_fabrics 00:19:22.056 rmmod nvme_keyring 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1985598 ']' 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1985598 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1985598 ']' 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1985598 
00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1985598 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1985598' 00:19:22.056 killing process with pid 1985598 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1985598 00:19:22.056 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1985598 00:19:22.317 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:22.317 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:22.317 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:22.317 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:22.317 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:22.317 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:22.317 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:22.317 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:22.317 18:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:22.317 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.317 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.317 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.232 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:24.232 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.KtE /tmp/spdk.key-sha256.1lY /tmp/spdk.key-sha384.czZ /tmp/spdk.key-sha512.KMh /tmp/spdk.key-sha512.nVa /tmp/spdk.key-sha384.hnu /tmp/spdk.key-sha256.vka '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:24.232 00:19:24.232 real 2m36.205s 00:19:24.232 user 5m51.793s 00:19:24.232 sys 0m24.567s 00:19:24.232 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.232 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.232 ************************************ 00:19:24.232 END TEST nvmf_auth_target 00:19:24.232 ************************************ 00:19:24.232 18:18:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:24.232 18:18:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:24.232 18:18:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:24.232 18:18:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:19:24.232 18:18:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:24.494 ************************************ 00:19:24.494 START TEST nvmf_bdevio_no_huge 00:19:24.494 ************************************ 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:24.494 * Looking for test storage... 00:19:24.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:24.494 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:24.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.495 --rc genhtml_branch_coverage=1 00:19:24.495 --rc genhtml_function_coverage=1 00:19:24.495 --rc genhtml_legend=1 00:19:24.495 --rc geninfo_all_blocks=1 00:19:24.495 --rc geninfo_unexecuted_blocks=1 00:19:24.495 00:19:24.495 ' 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:24.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.495 --rc genhtml_branch_coverage=1 00:19:24.495 --rc genhtml_function_coverage=1 00:19:24.495 --rc genhtml_legend=1 00:19:24.495 --rc geninfo_all_blocks=1 00:19:24.495 --rc geninfo_unexecuted_blocks=1 00:19:24.495 00:19:24.495 ' 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:24.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.495 --rc genhtml_branch_coverage=1 00:19:24.495 --rc genhtml_function_coverage=1 00:19:24.495 --rc genhtml_legend=1 00:19:24.495 --rc geninfo_all_blocks=1 00:19:24.495 --rc geninfo_unexecuted_blocks=1 00:19:24.495 00:19:24.495 ' 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:24.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.495 --rc genhtml_branch_coverage=1 
00:19:24.495 --rc genhtml_function_coverage=1 00:19:24.495 --rc genhtml_legend=1 00:19:24.495 --rc geninfo_all_blocks=1 00:19:24.495 --rc geninfo_unexecuted_blocks=1 00:19:24.495 00:19:24.495 ' 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.495 18:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:24.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:24.495 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 
0x159b)' 00:19:32.638 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:32.638 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:32.638 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:32.638 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.639 
18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:32.639 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:19:32.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:32.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:19:32.639 00:19:32.639 --- 10.0.0.2 ping statistics --- 00:19:32.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.639 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:32.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:32.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:19:32.639 00:19:32.639 --- 10.0.0.1 ping statistics --- 00:19:32.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.639 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1994209 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1994209 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1994209 ']' 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.639 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:32.639 [2024-11-19 18:18:33.502023] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:19:32.639 [2024-11-19 18:18:33.502092] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:32.639 [2024-11-19 18:18:33.606840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:32.639 [2024-11-19 18:18:33.666095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.639 [2024-11-19 18:18:33.666142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.639 [2024-11-19 18:18:33.666151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.639 [2024-11-19 18:18:33.666166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.639 [2024-11-19 18:18:33.666173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:32.639 [2024-11-19 18:18:33.667686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:32.639 [2024-11-19 18:18:33.667844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:32.640 [2024-11-19 18:18:33.668037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:32.640 [2024-11-19 18:18:33.668038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:32.901 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.901 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:32.901 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:32.901 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:32.901 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:33.163 [2024-11-19 18:18:34.379111] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:33.163 18:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:33.163 Malloc0 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:33.163 [2024-11-19 18:18:34.432992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.163 18:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:33.163 { 00:19:33.163 "params": { 00:19:33.163 "name": "Nvme$subsystem", 00:19:33.163 "trtype": "$TEST_TRANSPORT", 00:19:33.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.163 "adrfam": "ipv4", 00:19:33.163 "trsvcid": "$NVMF_PORT", 00:19:33.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.163 "hdgst": ${hdgst:-false}, 00:19:33.163 "ddgst": ${ddgst:-false} 00:19:33.163 }, 00:19:33.163 "method": "bdev_nvme_attach_controller" 00:19:33.163 } 00:19:33.163 EOF 00:19:33.163 )") 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:33.163 18:18:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:33.163 "params": { 00:19:33.163 "name": "Nvme1", 00:19:33.163 "trtype": "tcp", 00:19:33.163 "traddr": "10.0.0.2", 00:19:33.163 "adrfam": "ipv4", 00:19:33.163 "trsvcid": "4420", 00:19:33.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.163 "hdgst": false, 00:19:33.163 "ddgst": false 00:19:33.163 }, 00:19:33.163 "method": "bdev_nvme_attach_controller" 00:19:33.163 }' 00:19:33.163 [2024-11-19 18:18:34.491154] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:19:33.163 [2024-11-19 18:18:34.491246] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1994354 ] 00:19:33.163 [2024-11-19 18:18:34.604943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:33.425 [2024-11-19 18:18:34.666244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.425 [2024-11-19 18:18:34.666420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.425 [2024-11-19 18:18:34.666421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.686 I/O targets: 00:19:33.686 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:33.686 00:19:33.686 00:19:33.686 CUnit - A unit testing framework for C - Version 2.1-3 00:19:33.686 http://cunit.sourceforge.net/ 00:19:33.686 00:19:33.686 00:19:33.686 Suite: bdevio tests on: Nvme1n1 00:19:33.686 Test: blockdev write read block ...passed 00:19:33.686 Test: blockdev write zeroes read block ...passed 00:19:33.686 Test: blockdev write zeroes read no split ...passed 00:19:33.686 Test: blockdev write zeroes 
read split ...passed 00:19:33.947 Test: blockdev write zeroes read split partial ...passed 00:19:33.947 Test: blockdev reset ...[2024-11-19 18:18:35.196895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:33.947 [2024-11-19 18:18:35.196994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb84800 (9): Bad file descriptor 00:19:33.947 [2024-11-19 18:18:35.213060] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:19:33.947 passed 00:19:33.947 Test: blockdev write read 8 blocks ...passed 00:19:33.947 Test: blockdev write read size > 128k ...passed 00:19:33.947 Test: blockdev write read invalid size ...passed 00:19:33.947 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:33.947 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:33.947 Test: blockdev write read max offset ...passed 00:19:33.947 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:33.947 Test: blockdev writev readv 8 blocks ...passed 00:19:33.947 Test: blockdev writev readv 30 x 1block ...passed 00:19:34.209 Test: blockdev writev readv block ...passed 00:19:34.209 Test: blockdev writev readv size > 128k ...passed 00:19:34.209 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:34.209 Test: blockdev comparev and writev ...[2024-11-19 18:18:35.438568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:34.209 [2024-11-19 18:18:35.438620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:34.209 [2024-11-19 18:18:35.438636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:34.209 [2024-11-19 
18:18:35.438645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.209 [2024-11-19 18:18:35.439227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:34.209 [2024-11-19 18:18:35.439243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:34.209 [2024-11-19 18:18:35.439258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:34.209 [2024-11-19 18:18:35.439265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:34.209 [2024-11-19 18:18:35.439800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:34.209 [2024-11-19 18:18:35.439814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:34.209 [2024-11-19 18:18:35.439828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:34.209 [2024-11-19 18:18:35.439836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:34.209 [2024-11-19 18:18:35.440405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:34.209 [2024-11-19 18:18:35.440420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:34.209 [2024-11-19 18:18:35.440434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:19:34.209 [2024-11-19 18:18:35.440442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:34.209 passed 00:19:34.209 Test: blockdev nvme passthru rw ...passed 00:19:34.209 Test: blockdev nvme passthru vendor specific ...[2024-11-19 18:18:35.525006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:34.209 [2024-11-19 18:18:35.525023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:34.209 [2024-11-19 18:18:35.525394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:34.209 [2024-11-19 18:18:35.525408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:34.209 [2024-11-19 18:18:35.525765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:34.209 [2024-11-19 18:18:35.525777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:34.209 [2024-11-19 18:18:35.526139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:34.209 [2024-11-19 18:18:35.526153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:34.209 passed 00:19:34.209 Test: blockdev nvme admin passthru ...passed 00:19:34.209 Test: blockdev copy ...passed 00:19:34.209 00:19:34.209 Run Summary: Type Total Ran Passed Failed Inactive 00:19:34.209 suites 1 1 n/a 0 0 00:19:34.209 tests 23 23 23 0 0 00:19:34.209 asserts 152 152 152 0 n/a 00:19:34.209 00:19:34.209 Elapsed time = 1.141 seconds 
00:19:34.471 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:34.471 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.471 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:34.471 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.471 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:34.471 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:34.471 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:34.471 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:34.471 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:34.471 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:34.471 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:34.471 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:34.471 rmmod nvme_tcp 00:19:34.471 rmmod nvme_fabrics 00:19:34.732 rmmod nvme_keyring 00:19:34.733 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:34.733 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:19:34.733 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:34.733 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1994209 ']' 00:19:34.733 18:18:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1994209 00:19:34.733 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1994209 ']' 00:19:34.733 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1994209 00:19:34.733 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:34.733 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.733 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1994209 00:19:34.733 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:34.733 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:34.733 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1994209' 00:19:34.733 killing process with pid 1994209 00:19:34.733 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1994209 00:19:34.733 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1994209 00:19:34.994 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:34.994 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:34.994 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:34.994 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:34.994 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:34.994 18:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:34.994 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:34.994 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:34.994 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:34.994 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.994 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.994 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.541 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:37.541 00:19:37.541 real 0m12.762s 00:19:37.541 user 0m15.417s 00:19:37.541 sys 0m6.696s 00:19:37.541 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:37.541 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:37.541 ************************************ 00:19:37.541 END TEST nvmf_bdevio_no_huge 00:19:37.541 ************************************ 00:19:37.541 18:18:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:37.542 
************************************ 00:19:37.542 START TEST nvmf_tls 00:19:37.542 ************************************ 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:37.542 * Looking for test storage... 00:19:37.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:37.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.542 --rc genhtml_branch_coverage=1 00:19:37.542 --rc genhtml_function_coverage=1 00:19:37.542 --rc genhtml_legend=1 00:19:37.542 --rc geninfo_all_blocks=1 00:19:37.542 --rc geninfo_unexecuted_blocks=1 00:19:37.542 00:19:37.542 ' 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:37.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.542 --rc genhtml_branch_coverage=1 00:19:37.542 --rc genhtml_function_coverage=1 00:19:37.542 --rc genhtml_legend=1 00:19:37.542 --rc geninfo_all_blocks=1 00:19:37.542 --rc geninfo_unexecuted_blocks=1 00:19:37.542 00:19:37.542 ' 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:37.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.542 --rc genhtml_branch_coverage=1 00:19:37.542 --rc genhtml_function_coverage=1 00:19:37.542 --rc genhtml_legend=1 00:19:37.542 --rc geninfo_all_blocks=1 00:19:37.542 --rc geninfo_unexecuted_blocks=1 00:19:37.542 00:19:37.542 ' 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:37.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.542 --rc genhtml_branch_coverage=1 00:19:37.542 --rc genhtml_function_coverage=1 00:19:37.542 --rc genhtml_legend=1 00:19:37.542 --rc geninfo_all_blocks=1 00:19:37.542 --rc geninfo_unexecuted_blocks=1 00:19:37.542 00:19:37.542 ' 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.542 
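The trace above steps through `scripts/common.sh`'s `lt`/`cmp_versions` helpers, which split each version string on `.` and `-` and compare the fields numerically (here concluding that lcov `1.15` is older than `2`). A minimal Python sketch of that comparison, with a hypothetical helper name, might look like:

```python
import re

def version_lt(v1: str, v2: str) -> bool:
    """Sketch of cmp_versions' 'lt' path: split on '.' and '-',
    compare components as integers, missing components read as 0."""
    a = [int(x) for x in re.split(r"[.-]", v1) if x.isdigit()]
    b = [int(x) for x in re.split(r"[.-]", v2) if x.isdigit()]
    # Pad the shorter list so "1.15" vs "2" compares the same number of fields.
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a < b
```

With this, `version_lt("1.15", "2")` is true, matching the `lt 1.15 2` check in the trace that selects the older-lcov `LCOV_OPTS`.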
18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.542 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:37.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:37.543 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:37.543 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:37.543 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:37.543 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:37.543 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:37.543 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:37.543 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.543 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:37.543 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:37.543 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:37.543 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.543 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:37.543 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.543 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:37.543 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:37.543 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:19:37.543 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:45.687 18:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:45.687 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:45.687 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:45.687 18:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:45.687 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:45.687 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:45.687 18:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:45.687 
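The `gather_supported_nvmf_pci_devs` section above buckets NICs by PCI vendor:device id into `e810`, `x722`, and `mlx` arrays before choosing `net_devs` (the two "Found 0000:4b:00.0/1 (0x8086 - 0x159b)" lines are E810-family ports). A sketch of that classification in Python, using a subset of the device ids visible in the trace (the bucket names mirror the shell arrays; the function name is hypothetical):

```python
# Vendor ids used by nvmf/common.sh's NIC discovery.
INTEL, MELLANOX = 0x8086, 0x15B3

# Subset of the vendor:device pairs bucketed in the trace above.
NIC_BUCKETS = {
    (INTEL, 0x1592): "e810",
    (INTEL, 0x159B): "e810",
    (INTEL, 0x37D2): "x722",
    (MELLANOX, 0x1017): "mlx",
    (MELLANOX, 0x101B): "mlx",
    (MELLANOX, 0x101D): "mlx",
}

def classify_nic(vendor: int, device: int) -> str:
    """Return the shell-array bucket a NIC would land in, or 'unknown'."""
    return NIC_BUCKETS.get((vendor, device), "unknown")
```

For the hardware in this run, `classify_nic(0x8086, 0x159B)` returns `"e810"`, which is why both `cvl_0_0` and `cvl_0_1` end up in `net_devs` and one is moved into the `cvl_0_0_ns_spdk` namespace as the target interface.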
18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:45.687 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:45.687 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:45.687 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:45.687 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:45.687 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:45.687 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:45.687 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:45.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:19:45.688 00:19:45.688 --- 10.0.0.2 ping statistics --- 00:19:45.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.688 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:45.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:45.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:19:45.688 00:19:45.688 --- 10.0.0.1 ping statistics --- 00:19:45.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.688 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1998911 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1998911 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1998911 ']' 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.688 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.688 [2024-11-19 18:18:46.331110] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:19:45.688 [2024-11-19 18:18:46.331181] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.688 [2024-11-19 18:18:46.432892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.688 [2024-11-19 18:18:46.482986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.688 [2024-11-19 18:18:46.483041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:45.688 [2024-11-19 18:18:46.483050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.688 [2024-11-19 18:18:46.483057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.688 [2024-11-19 18:18:46.483063] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:45.688 [2024-11-19 18:18:46.483841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.949 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.949 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:45.949 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:45.949 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:45.949 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.949 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.949 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:45.950 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:45.950 true 00:19:45.950 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:45.950 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:46.211 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:46.211 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:46.211 
18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:46.472 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:46.472 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:46.733 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:46.733 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:46.733 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:46.733 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:46.733 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:46.994 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:46.994 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:46.994 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:46.994 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:47.256 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:47.256 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:47.256 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:19:47.256 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:47.256 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:47.517 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:47.517 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:47.517 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:47.778 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:47.778 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:48.039 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:48.039 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:48.039 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:48.039 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:48.039 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:48.039 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:48.039 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:48.039 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:48.039 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:48.039 18:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:48.039 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:48.039 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:48.039 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:48.039 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:48.039 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:48.039 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:48.039 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:48.039 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:48.040 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:48.040 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.kBWysoGs96 00:19:48.040 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:48.040 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Jg3Fvl0aV6 00:19:48.040 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:48.040 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:48.040 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.kBWysoGs96 00:19:48.040 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Jg3Fvl0aV6 00:19:48.040 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:48.301 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:48.563 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.kBWysoGs96 00:19:48.563 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.kBWysoGs96 00:19:48.563 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:48.563 [2024-11-19 18:18:49.984509] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.563 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:48.823 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:49.084 [2024-11-19 18:18:50.341441] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.084 [2024-11-19 18:18:50.341652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.084 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:49.084 malloc0 00:19:49.084 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:49.345 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.kBWysoGs96 00:19:49.605 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:49.605 18:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.kBWysoGs96 00:20:01.840 Initializing NVMe Controllers 00:20:01.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:01.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:01.840 Initialization complete. Launching workers. 
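[Editor's note] The `format_interchange_psk` step above emits keys like `NVMeTLSkey-1:01:MDAx...:`. Decoding the logged value shows the base64 portion is the ASCII hex string itself (as passed by the script) followed by four trailing bytes, consistent with the NVMe TLS interchange format: prefix, two-digit hash id, then base64 of the key bytes plus a little-endian CRC-32, colon-terminated. A sketch of what the `python -` heredoc plausibly computes — the CRC-32/zlib detail is an assumption inferred from the output shape, not taken from this log:

```python
import base64
import zlib

def format_interchange_psk(hexkey: str, hmac_id: int) -> str:
    """Sketch of the interchange encoding seen in the log: the key bytes
    (here the ASCII hex string itself, as the test script passes it) plus a
    little-endian CRC-32 of those bytes, base64-encoded between colons."""
    key = hexkey.encode("ascii")
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")
    return "NVMeTLSkey-1:{:02x}:{}:".format(
        hmac_id, base64.b64encode(key + crc).decode("ascii"))

psk = format_interchange_psk("00112233445566778899aabbccddeeff", 1)
```

The leading base64 groups (`MDAxMTIyMzM0NDU1...`) match the logged key exactly; the last few characters additionally encode the CRC tail.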
00:20:01.840 ======================================================== 00:20:01.840 Latency(us) 00:20:01.840 Device Information : IOPS MiB/s Average min max 00:20:01.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18480.26 72.19 3463.37 1058.20 6309.69 00:20:01.840 ======================================================== 00:20:01.840 Total : 18480.26 72.19 3463.37 1058.20 6309.69 00:20:01.840 00:20:01.840 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kBWysoGs96 00:20:01.840 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:01.840 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:01.840 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:01.840 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kBWysoGs96 00:20:01.840 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:01.840 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2001819 00:20:01.840 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.840 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2001819 /var/tmp/bdevperf.sock 00:20:01.840 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:01.840 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2001819 ']' 00:20:01.840 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
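[Editor's note] The MiB/s columns in these result tables are just IOPS scaled by the 4096-byte IO size used in both runs (`-o 4096`): the perf run's 18480.26 IOPS gives 72.19 MiB/s, and the bdevperf run's 5943.60 IOPS gives the 23.22 MiB/s reported in its JSON. A one-line check:

```python
def iops_to_mibps(iops: float, io_size: int = 4096) -> float:
    # MiB/s = IOPS * bytes-per-IO / 2**20
    return iops * io_size / 2**20

perf_mibps = iops_to_mibps(18480.26)  # spdk_nvme_perf summary above
bdev_mibps = iops_to_mibps(5943.60)   # bdevperf "mibps" field
```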
00:20:01.840 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.840 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.840 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.840 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.840 [2024-11-19 18:19:01.206861] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:20:01.840 [2024-11-19 18:19:01.206921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001819 ] 00:20:01.840 [2024-11-19 18:19:01.293983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.840 [2024-11-19 18:19:01.329391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.840 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.840 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:01.840 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kBWysoGs96 00:20:01.840 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:20:01.840 [2024-11-19 18:19:02.352944] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:01.840 TLSTESTn1 00:20:01.840 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:01.840 Running I/O for 10 seconds... 00:20:03.481 5661.00 IOPS, 22.11 MiB/s [2024-11-19T17:19:05.569Z] 5748.00 IOPS, 22.45 MiB/s [2024-11-19T17:19:06.631Z] 5546.00 IOPS, 21.66 MiB/s [2024-11-19T17:19:07.572Z] 5644.00 IOPS, 22.05 MiB/s [2024-11-19T17:19:08.953Z] 5810.80 IOPS, 22.70 MiB/s [2024-11-19T17:19:09.891Z] 5886.50 IOPS, 22.99 MiB/s [2024-11-19T17:19:10.829Z] 5898.29 IOPS, 23.04 MiB/s [2024-11-19T17:19:11.767Z] 5917.38 IOPS, 23.11 MiB/s [2024-11-19T17:19:12.706Z] 5935.89 IOPS, 23.19 MiB/s [2024-11-19T17:19:12.706Z] 5941.50 IOPS, 23.21 MiB/s 00:20:11.235 Latency(us) 00:20:11.235 [2024-11-19T17:19:12.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.235 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:11.235 Verification LBA range: start 0x0 length 0x2000 00:20:11.235 TLSTESTn1 : 10.02 5943.60 23.22 0.00 0.00 21497.16 5515.95 52865.71 00:20:11.235 [2024-11-19T17:19:12.706Z] =================================================================================================================== 00:20:11.235 [2024-11-19T17:19:12.706Z] Total : 5943.60 23.22 0.00 0.00 21497.16 5515.95 52865.71 00:20:11.235 { 00:20:11.235 "results": [ 00:20:11.235 { 00:20:11.235 "job": "TLSTESTn1", 00:20:11.235 "core_mask": "0x4", 00:20:11.235 "workload": "verify", 00:20:11.235 "status": "finished", 00:20:11.235 "verify_range": { 00:20:11.235 "start": 0, 00:20:11.235 "length": 8192 00:20:11.235 }, 00:20:11.235 "queue_depth": 128, 00:20:11.235 "io_size": 4096, 00:20:11.235 "runtime": 10.017842, 00:20:11.235 "iops": 
5943.595437021267, 00:20:11.235 "mibps": 23.217169675864323, 00:20:11.235 "io_failed": 0, 00:20:11.235 "io_timeout": 0, 00:20:11.235 "avg_latency_us": 21497.16209734305, 00:20:11.235 "min_latency_us": 5515.946666666667, 00:20:11.235 "max_latency_us": 52865.706666666665 00:20:11.235 } 00:20:11.235 ], 00:20:11.235 "core_count": 1 00:20:11.235 } 00:20:11.235 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:11.235 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2001819 00:20:11.235 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2001819 ']' 00:20:11.235 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2001819 00:20:11.235 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:11.235 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.235 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2001819 00:20:11.235 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:11.236 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:11.236 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2001819' 00:20:11.236 killing process with pid 2001819 00:20:11.236 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2001819 00:20:11.236 Received shutdown signal, test time was about 10.000000 seconds 00:20:11.236 00:20:11.236 Latency(us) 00:20:11.236 [2024-11-19T17:19:12.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.236 [2024-11-19T17:19:12.707Z] 
=================================================================================================================== 00:20:11.236 [2024-11-19T17:19:12.707Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:11.236 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2001819 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Jg3Fvl0aV6 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Jg3Fvl0aV6 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Jg3Fvl0aV6 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Jg3Fvl0aV6 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2004007 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2004007 /var/tmp/bdevperf.sock 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2004007 ']' 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.496 [2024-11-19 18:19:12.796696] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:20:11.496 [2024-11-19 18:19:12.796745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004007 ] 00:20:11.496 [2024-11-19 18:19:12.845736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.496 [2024-11-19 18:19:12.873693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.496 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.497 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:11.497 18:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Jg3Fvl0aV6 00:20:11.757 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:12.017 [2024-11-19 18:19:13.298673] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:12.017 [2024-11-19 18:19:13.310191] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:12.017 [2024-11-19 18:19:13.310624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7bb0 (107): Transport endpoint is not connected 00:20:12.017 [2024-11-19 18:19:13.311619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7bb0 (9): Bad file descriptor 00:20:12.017 [2024-11-19 
18:19:13.312621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:12.017 [2024-11-19 18:19:13.312630] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:12.017 [2024-11-19 18:19:13.312636] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:12.017 [2024-11-19 18:19:13.312644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:12.017 request: 00:20:12.017 { 00:20:12.017 "name": "TLSTEST", 00:20:12.017 "trtype": "tcp", 00:20:12.017 "traddr": "10.0.0.2", 00:20:12.017 "adrfam": "ipv4", 00:20:12.018 "trsvcid": "4420", 00:20:12.018 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.018 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:12.018 "prchk_reftag": false, 00:20:12.018 "prchk_guard": false, 00:20:12.018 "hdgst": false, 00:20:12.018 "ddgst": false, 00:20:12.018 "psk": "key0", 00:20:12.018 "allow_unrecognized_csi": false, 00:20:12.018 "method": "bdev_nvme_attach_controller", 00:20:12.018 "req_id": 1 00:20:12.018 } 00:20:12.018 Got JSON-RPC error response 00:20:12.018 response: 00:20:12.018 { 00:20:12.018 "code": -5, 00:20:12.018 "message": "Input/output error" 00:20:12.018 } 00:20:12.018 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2004007 00:20:12.018 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2004007 ']' 00:20:12.018 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2004007 00:20:12.018 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:12.018 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.018 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2004007 00:20:12.018 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:12.018 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:12.018 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2004007' 00:20:12.018 killing process with pid 2004007 00:20:12.018 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2004007 00:20:12.018 Received shutdown signal, test time was about 10.000000 seconds 00:20:12.018 00:20:12.018 Latency(us) 00:20:12.018 [2024-11-19T17:19:13.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.018 [2024-11-19T17:19:13.489Z] =================================================================================================================== 00:20:12.018 [2024-11-19T17:19:13.489Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:12.018 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2004007 00:20:12.278 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:12.278 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:12.278 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:12.278 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:12.278 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:12.278 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kBWysoGs96 00:20:12.278 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
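[Editor's note] These `NOT run_bdevperf ...` cases are expected-failure tests: the `es=1`, `(( es > 128 ))`, `(( !es == 0 ))` lines in the trace invert the wrapped command's exit status. A minimal stand-in for that wrapper — not SPDK's actual `autotest_common.sh` helper, just the same inversion logic:

```shell
# NOT <cmd...>: succeed iff <cmd> fails. The real helper also treats
# statuses > 128 (killed by signal) specially; this sketch only inverts.
NOT() {
    local es=0
    "$@" || es=$?
    if (( es == 0 )); then
        return 1    # wrapped command unexpectedly succeeded
    fi
    return 0        # wrapped command failed, as expected
}
```

This is why the mismatched-key and mismatched-NQN attach attempts below can log JSON-RPC errors yet leave the overall test passing.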
00:20:12.278 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kBWysoGs96 00:20:12.278 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:12.278 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.278 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:12.278 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.278 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kBWysoGs96 00:20:12.278 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:12.278 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:12.278 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:12.278 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kBWysoGs96 00:20:12.279 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:12.279 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2004265 00:20:12.279 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:12.279 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2004265 /var/tmp/bdevperf.sock 00:20:12.279 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 10 00:20:12.279 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2004265 ']' 00:20:12.279 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.279 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.279 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.279 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.279 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.279 [2024-11-19 18:19:13.555581] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
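[Editor's note] The `Could not find PSK for identity: NVMe0R01 ...` errors in the mismatched-NQN runs below show the lookup key the target constructs during the TLS handshake: a fixed `NVMe0R01` tag followed by the host NQN and subsystem NQN, space-separated (the tag's fields encode the PSK type and hash per the NVMe/TCP transport rules; that interpretation is an assumption, the string itself is taken verbatim from the log):

```python
def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
    """Builds the identity string exactly as it appears in the
    tcp_sock_get_key / posix_sock_psk_find_session_server_cb errors."""
    return " ".join(["NVMe0R01", hostnqn, subnqn])

ident = tls_psk_identity("nqn.2016-06.io.spdk:host2",
                         "nqn.2016-06.io.spdk:cnode1")
```

Since only `nqn.2016-06.io.spdk:host1` was registered with `--psk key0`, lookups for `host2` (or for `cnode2`) miss, and the controller attach fails with `-5` as intended.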
00:20:12.279 [2024-11-19 18:19:13.555638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004265 ] 00:20:12.279 [2024-11-19 18:19:13.638976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.279 [2024-11-19 18:19:13.667040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.219 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.219 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:13.219 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kBWysoGs96 00:20:13.219 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:13.219 [2024-11-19 18:19:14.673492] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.219 [2024-11-19 18:19:14.683422] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:13.219 [2024-11-19 18:19:14.683443] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:13.219 [2024-11-19 18:19:14.683463] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:13.219 [2024-11-19 18:19:14.683792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f2bb0 (107): Transport endpoint is not connected 00:20:13.219 [2024-11-19 18:19:14.684789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f2bb0 (9): Bad file descriptor 00:20:13.219 [2024-11-19 18:19:14.685790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:13.219 [2024-11-19 18:19:14.685800] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:13.219 [2024-11-19 18:19:14.685806] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:13.219 [2024-11-19 18:19:14.685819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:13.480 request: 00:20:13.480 { 00:20:13.480 "name": "TLSTEST", 00:20:13.480 "trtype": "tcp", 00:20:13.480 "traddr": "10.0.0.2", 00:20:13.480 "adrfam": "ipv4", 00:20:13.480 "trsvcid": "4420", 00:20:13.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.480 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:13.480 "prchk_reftag": false, 00:20:13.480 "prchk_guard": false, 00:20:13.480 "hdgst": false, 00:20:13.480 "ddgst": false, 00:20:13.480 "psk": "key0", 00:20:13.480 "allow_unrecognized_csi": false, 00:20:13.480 "method": "bdev_nvme_attach_controller", 00:20:13.480 "req_id": 1 00:20:13.480 } 00:20:13.480 Got JSON-RPC error response 00:20:13.480 response: 00:20:13.480 { 00:20:13.480 "code": -5, 00:20:13.480 "message": "Input/output error" 00:20:13.480 } 00:20:13.480 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2004265 00:20:13.480 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2004265 ']' 00:20:13.480 18:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2004265 00:20:13.480 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:13.480 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.480 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2004265 00:20:13.480 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:13.480 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:13.480 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2004265' 00:20:13.480 killing process with pid 2004265 00:20:13.480 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2004265 00:20:13.480 Received shutdown signal, test time was about 10.000000 seconds 00:20:13.480 00:20:13.480 Latency(us) 00:20:13.480 [2024-11-19T17:19:14.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.480 [2024-11-19T17:19:14.951Z] =================================================================================================================== 00:20:13.480 [2024-11-19T17:19:14.951Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:13.480 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2004265 00:20:13.480 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:13.480 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:13.480 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:13.480 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:13.481 18:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kBWysoGs96 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kBWysoGs96 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kBWysoGs96 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kBWysoGs96 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2004443 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2004443 /var/tmp/bdevperf.sock 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2004443 ']' 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.481 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.481 [2024-11-19 18:19:14.928940] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:20:13.481 [2024-11-19 18:19:14.928995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004443 ] 00:20:13.742 [2024-11-19 18:19:15.012204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.742 [2024-11-19 18:19:15.040736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.312 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.312 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:14.312 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kBWysoGs96 00:20:14.572 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:14.830 [2024-11-19 18:19:16.063292] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.831 [2024-11-19 18:19:16.067828] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:14.831 [2024-11-19 18:19:16.067849] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:14.831 [2024-11-19 18:19:16.067868] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:14.831 [2024-11-19 18:19:16.068520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143ebb0 (107): Transport endpoint is not connected 00:20:14.831 [2024-11-19 18:19:16.069515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143ebb0 (9): Bad file descriptor 00:20:14.831 [2024-11-19 18:19:16.070516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:14.831 [2024-11-19 18:19:16.070525] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:14.831 [2024-11-19 18:19:16.070534] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:14.831 [2024-11-19 18:19:16.070542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:20:14.831 request: 00:20:14.831 { 00:20:14.831 "name": "TLSTEST", 00:20:14.831 "trtype": "tcp", 00:20:14.831 "traddr": "10.0.0.2", 00:20:14.831 "adrfam": "ipv4", 00:20:14.831 "trsvcid": "4420", 00:20:14.831 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:14.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.831 "prchk_reftag": false, 00:20:14.831 "prchk_guard": false, 00:20:14.831 "hdgst": false, 00:20:14.831 "ddgst": false, 00:20:14.831 "psk": "key0", 00:20:14.831 "allow_unrecognized_csi": false, 00:20:14.831 "method": "bdev_nvme_attach_controller", 00:20:14.831 "req_id": 1 00:20:14.831 } 00:20:14.831 Got JSON-RPC error response 00:20:14.831 response: 00:20:14.831 { 00:20:14.831 "code": -5, 00:20:14.831 "message": "Input/output error" 00:20:14.831 } 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2004443 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2004443 ']' 00:20:14.831 18:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2004443 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2004443 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2004443' 00:20:14.831 killing process with pid 2004443 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2004443 00:20:14.831 Received shutdown signal, test time was about 10.000000 seconds 00:20:14.831 00:20:14.831 Latency(us) 00:20:14.831 [2024-11-19T17:19:16.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.831 [2024-11-19T17:19:16.302Z] =================================================================================================================== 00:20:14.831 [2024-11-19T17:19:16.302Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2004443 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:14.831 18:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2004704 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:14.831 18:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2004704 /var/tmp/bdevperf.sock 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2004704 ']' 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.831 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.091 [2024-11-19 18:19:16.313676] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:20:15.091 [2024-11-19 18:19:16.313728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004704 ] 00:20:15.091 [2024-11-19 18:19:16.398921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.091 [2024-11-19 18:19:16.427441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.661 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.661 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:15.661 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:15.922 [2024-11-19 18:19:17.265396] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:15.922 [2024-11-19 18:19:17.265422] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:15.922 request: 00:20:15.922 { 00:20:15.922 "name": "key0", 00:20:15.922 "path": "", 00:20:15.922 "method": "keyring_file_add_key", 00:20:15.922 "req_id": 1 00:20:15.922 } 00:20:15.922 Got JSON-RPC error response 00:20:15.922 response: 00:20:15.922 { 00:20:15.922 "code": -1, 00:20:15.922 "message": "Operation not permitted" 00:20:15.922 } 00:20:15.922 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:16.182 [2024-11-19 18:19:17.449939] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:20:16.182 [2024-11-19 18:19:17.449959] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:16.182 request: 00:20:16.182 { 00:20:16.182 "name": "TLSTEST", 00:20:16.182 "trtype": "tcp", 00:20:16.182 "traddr": "10.0.0.2", 00:20:16.182 "adrfam": "ipv4", 00:20:16.182 "trsvcid": "4420", 00:20:16.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.182 "prchk_reftag": false, 00:20:16.182 "prchk_guard": false, 00:20:16.182 "hdgst": false, 00:20:16.182 "ddgst": false, 00:20:16.182 "psk": "key0", 00:20:16.182 "allow_unrecognized_csi": false, 00:20:16.182 "method": "bdev_nvme_attach_controller", 00:20:16.182 "req_id": 1 00:20:16.182 } 00:20:16.182 Got JSON-RPC error response 00:20:16.182 response: 00:20:16.182 { 00:20:16.182 "code": -126, 00:20:16.182 "message": "Required key not available" 00:20:16.182 } 00:20:16.182 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2004704 00:20:16.182 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2004704 ']' 00:20:16.182 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2004704 00:20:16.182 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:16.182 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:16.182 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2004704 00:20:16.182 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:16.182 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:16.182 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2004704' 00:20:16.182 killing process with pid 2004704 
00:20:16.182 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2004704 00:20:16.182 Received shutdown signal, test time was about 10.000000 seconds 00:20:16.182 00:20:16.182 Latency(us) 00:20:16.182 [2024-11-19T17:19:17.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.182 [2024-11-19T17:19:17.653Z] =================================================================================================================== 00:20:16.182 [2024-11-19T17:19:17.653Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:16.182 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2004704 00:20:16.182 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:16.182 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:16.183 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:16.183 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:16.183 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:16.183 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1998911 00:20:16.183 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1998911 ']' 00:20:16.183 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1998911 00:20:16.183 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:16.183 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:16.183 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1998911 00:20:16.443 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:20:16.443 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:16.443 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1998911' 00:20:16.443 killing process with pid 1998911 00:20:16.443 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1998911 00:20:16.443 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1998911 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.UPgGCcDP1P 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:16.444 18:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.UPgGCcDP1P 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2005056 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2005056 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2005056 ']' 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.444 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.704 [2024-11-19 18:19:17.916886] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:20:16.704 [2024-11-19 18:19:17.916945] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.704 [2024-11-19 18:19:18.006683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.704 [2024-11-19 18:19:18.036876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.704 [2024-11-19 18:19:18.036903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.704 [2024-11-19 18:19:18.036909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.704 [2024-11-19 18:19:18.036914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.704 [2024-11-19 18:19:18.036918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:16.704 [2024-11-19 18:19:18.037369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.275 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.275 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:17.275 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:17.275 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:17.275 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.535 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.535 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.UPgGCcDP1P 00:20:17.535 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UPgGCcDP1P 00:20:17.535 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:17.535 [2024-11-19 18:19:18.896999] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.535 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:17.795 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:17.795 [2024-11-19 18:19:19.233833] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.795 [2024-11-19 18:19:19.234032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:17.795 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:18.056 malloc0 00:20:18.056 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:18.317 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UPgGCcDP1P 00:20:18.317 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:18.577 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UPgGCcDP1P 00:20:18.577 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:18.577 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:18.577 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:18.577 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UPgGCcDP1P 00:20:18.577 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.577 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2005437 00:20:18.577 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.577 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2005437 /var/tmp/bdevperf.sock 
00:20:18.578 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.578 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2005437 ']' 00:20:18.578 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.578 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.578 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.578 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.578 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.578 [2024-11-19 18:19:19.966208] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:20:18.578 [2024-11-19 18:19:19.966262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005437 ] 00:20:18.838 [2024-11-19 18:19:20.049331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.838 [2024-11-19 18:19:20.079650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.407 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.407 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:19.407 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UPgGCcDP1P 00:20:19.667 18:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:19.667 [2024-11-19 18:19:21.079550] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.926 TLSTESTn1 00:20:19.926 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:19.926 Running I/O for 10 seconds... 
00:20:21.801 5341.00 IOPS, 20.86 MiB/s [2024-11-19T17:19:24.651Z] 5594.50 IOPS, 21.85 MiB/s [2024-11-19T17:19:25.588Z] 5767.33 IOPS, 22.53 MiB/s [2024-11-19T17:19:26.526Z] 5853.00 IOPS, 22.86 MiB/s [2024-11-19T17:19:27.464Z] 5757.40 IOPS, 22.49 MiB/s [2024-11-19T17:19:28.401Z] 5801.67 IOPS, 22.66 MiB/s [2024-11-19T17:19:29.339Z] 5869.86 IOPS, 22.93 MiB/s [2024-11-19T17:19:30.719Z] 5673.25 IOPS, 22.16 MiB/s [2024-11-19T17:19:31.288Z] 5734.56 IOPS, 22.40 MiB/s [2024-11-19T17:19:31.548Z] 5780.10 IOPS, 22.58 MiB/s 00:20:30.077 Latency(us) 00:20:30.077 [2024-11-19T17:19:31.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.077 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:30.077 Verification LBA range: start 0x0 length 0x2000 00:20:30.077 TLSTESTn1 : 10.02 5783.33 22.59 0.00 0.00 22099.22 5734.40 25012.91 00:20:30.077 [2024-11-19T17:19:31.548Z] =================================================================================================================== 00:20:30.077 [2024-11-19T17:19:31.548Z] Total : 5783.33 22.59 0.00 0.00 22099.22 5734.40 25012.91 00:20:30.077 { 00:20:30.077 "results": [ 00:20:30.077 { 00:20:30.077 "job": "TLSTESTn1", 00:20:30.077 "core_mask": "0x4", 00:20:30.077 "workload": "verify", 00:20:30.077 "status": "finished", 00:20:30.077 "verify_range": { 00:20:30.077 "start": 0, 00:20:30.077 "length": 8192 00:20:30.077 }, 00:20:30.077 "queue_depth": 128, 00:20:30.077 "io_size": 4096, 00:20:30.077 "runtime": 10.016205, 00:20:30.077 "iops": 5783.328116786747, 00:20:30.077 "mibps": 22.59112545619823, 00:20:30.077 "io_failed": 0, 00:20:30.077 "io_timeout": 0, 00:20:30.077 "avg_latency_us": 22099.221411776893, 00:20:30.077 "min_latency_us": 5734.4, 00:20:30.077 "max_latency_us": 25012.906666666666 00:20:30.077 } 00:20:30.077 ], 00:20:30.077 "core_count": 1 00:20:30.077 } 00:20:30.077 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:20:30.077 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2005437 00:20:30.077 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2005437 ']' 00:20:30.077 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2005437 00:20:30.077 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:30.077 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.077 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2005437 00:20:30.077 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:30.077 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:30.077 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2005437' 00:20:30.077 killing process with pid 2005437 00:20:30.077 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2005437 00:20:30.077 Received shutdown signal, test time was about 10.000000 seconds 00:20:30.077 00:20:30.078 Latency(us) 00:20:30.078 [2024-11-19T17:19:31.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.078 [2024-11-19T17:19:31.549Z] =================================================================================================================== 00:20:30.078 [2024-11-19T17:19:31.549Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2005437 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.UPgGCcDP1P 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UPgGCcDP1P 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UPgGCcDP1P 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UPgGCcDP1P 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UPgGCcDP1P 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2007763 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2007763 /var/tmp/bdevperf.sock 00:20:30.078 
18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2007763 ']' 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.078 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.338 [2024-11-19 18:19:31.551017] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:20:30.338 [2024-11-19 18:19:31.551071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2007763 ] 00:20:30.338 [2024-11-19 18:19:31.636060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.338 [2024-11-19 18:19:31.665285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.908 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.908 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:30.908 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UPgGCcDP1P 00:20:31.169 [2024-11-19 18:19:32.507516] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.UPgGCcDP1P': 0100666 00:20:31.169 [2024-11-19 18:19:32.507539] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:31.169 request: 00:20:31.169 { 00:20:31.169 "name": "key0", 00:20:31.169 "path": "/tmp/tmp.UPgGCcDP1P", 00:20:31.169 "method": "keyring_file_add_key", 00:20:31.169 "req_id": 1 00:20:31.169 } 00:20:31.169 Got JSON-RPC error response 00:20:31.169 response: 00:20:31.169 { 00:20:31.169 "code": -1, 00:20:31.169 "message": "Operation not permitted" 00:20:31.169 } 00:20:31.169 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:31.430 [2024-11-19 18:19:32.692052] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.430 [2024-11-19 18:19:32.692075] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:31.430 request: 00:20:31.430 { 00:20:31.430 "name": "TLSTEST", 00:20:31.430 "trtype": "tcp", 00:20:31.430 "traddr": "10.0.0.2", 00:20:31.430 "adrfam": "ipv4", 00:20:31.430 "trsvcid": "4420", 00:20:31.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.430 "prchk_reftag": false, 00:20:31.430 "prchk_guard": false, 00:20:31.430 "hdgst": false, 00:20:31.430 "ddgst": false, 00:20:31.430 "psk": "key0", 00:20:31.430 "allow_unrecognized_csi": false, 00:20:31.430 "method": "bdev_nvme_attach_controller", 00:20:31.430 "req_id": 1 00:20:31.430 } 00:20:31.430 Got JSON-RPC error response 00:20:31.430 response: 00:20:31.430 { 00:20:31.430 "code": -126, 00:20:31.430 "message": "Required key not available" 00:20:31.430 } 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2007763 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2007763 ']' 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2007763 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2007763 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2007763' 00:20:31.430 killing process with pid 2007763 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2007763 00:20:31.430 Received shutdown signal, test time was about 10.000000 seconds 00:20:31.430 00:20:31.430 Latency(us) 00:20:31.430 [2024-11-19T17:19:32.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.430 [2024-11-19T17:19:32.901Z] =================================================================================================================== 00:20:31.430 [2024-11-19T17:19:32.901Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2007763 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2005056 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2005056 ']' 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2005056 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.430 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2005056 00:20:31.691 
18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:31.691 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:31.691 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2005056' 00:20:31.691 killing process with pid 2005056 00:20:31.691 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2005056 00:20:31.691 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2005056 00:20:31.691 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:31.691 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:31.691 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:31.691 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.691 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2008109 00:20:31.691 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:31.691 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2008109 00:20:31.691 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2008109 ']' 00:20:31.691 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.691 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:31.692 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:20:31.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.692 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:31.692 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.692 [2024-11-19 18:19:33.116706] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:20:31.692 [2024-11-19 18:19:33.116762] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.953 [2024-11-19 18:19:33.204793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.953 [2024-11-19 18:19:33.233021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.953 [2024-11-19 18:19:33.233052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.953 [2024-11-19 18:19:33.233057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.953 [2024-11-19 18:19:33.233062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.953 [2024-11-19 18:19:33.233066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:31.953 [2024-11-19 18:19:33.233522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.524 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.524 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:32.524 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:32.524 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:32.524 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.524 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.524 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.UPgGCcDP1P 00:20:32.524 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:32.524 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.UPgGCcDP1P 00:20:32.524 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:32.524 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:32.524 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:32.524 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:32.524 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.UPgGCcDP1P 00:20:32.525 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UPgGCcDP1P 00:20:32.525 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:32.785 [2024-11-19 18:19:34.124683] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.785 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:33.046 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:33.046 [2024-11-19 18:19:34.461505] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:33.046 [2024-11-19 18:19:34.461714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.046 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:33.306 malloc0 00:20:33.306 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:33.566 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UPgGCcDP1P 00:20:33.566 [2024-11-19 18:19:34.936551] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.UPgGCcDP1P': 0100666 00:20:33.566 [2024-11-19 18:19:34.936574] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:33.566 request: 00:20:33.566 { 00:20:33.566 "name": "key0", 00:20:33.566 "path": "/tmp/tmp.UPgGCcDP1P", 00:20:33.566 "method": "keyring_file_add_key", 00:20:33.566 "req_id": 1 
00:20:33.566 } 00:20:33.566 Got JSON-RPC error response 00:20:33.566 response: 00:20:33.566 { 00:20:33.566 "code": -1, 00:20:33.566 "message": "Operation not permitted" 00:20:33.566 } 00:20:33.566 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:33.826 [2024-11-19 18:19:35.104987] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:33.826 [2024-11-19 18:19:35.105014] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:33.826 request: 00:20:33.826 { 00:20:33.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.826 "host": "nqn.2016-06.io.spdk:host1", 00:20:33.826 "psk": "key0", 00:20:33.826 "method": "nvmf_subsystem_add_host", 00:20:33.826 "req_id": 1 00:20:33.826 } 00:20:33.826 Got JSON-RPC error response 00:20:33.826 response: 00:20:33.826 { 00:20:33.826 "code": -32603, 00:20:33.826 "message": "Internal error" 00:20:33.826 } 00:20:33.826 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:33.826 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:33.826 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:33.826 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:33.826 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2008109 00:20:33.826 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2008109 ']' 00:20:33.826 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2008109 00:20:33.826 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:33.826 18:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.826 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2008109 00:20:33.826 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:33.826 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:33.826 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2008109' 00:20:33.826 killing process with pid 2008109 00:20:33.826 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2008109 00:20:33.826 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2008109 00:20:33.826 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.UPgGCcDP1P 00:20:34.088 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:34.088 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:34.088 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.088 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.088 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2008486 00:20:34.088 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2008486 00:20:34.088 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:34.088 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2008486 ']' 00:20:34.088 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.088 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.088 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.088 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.088 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.088 [2024-11-19 18:19:35.356483] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:20:34.088 [2024-11-19 18:19:35.356539] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.088 [2024-11-19 18:19:35.445737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.088 [2024-11-19 18:19:35.476215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.088 [2024-11-19 18:19:35.476241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.088 [2024-11-19 18:19:35.476249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.088 [2024-11-19 18:19:35.476254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.088 [2024-11-19 18:19:35.476258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:34.088 [2024-11-19 18:19:35.476693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.031 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.031 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:35.031 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.031 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.031 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.031 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.031 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.UPgGCcDP1P 00:20:35.031 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UPgGCcDP1P 00:20:35.031 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:35.031 [2024-11-19 18:19:36.335970] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.031 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:35.291 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:35.291 [2024-11-19 18:19:36.656757] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:35.291 [2024-11-19 18:19:36.656954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:35.291 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:35.551 malloc0 00:20:35.551 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:35.551 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UPgGCcDP1P 00:20:35.812 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:36.073 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2008868 00:20:36.073 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.073 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.073 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2008868 /var/tmp/bdevperf.sock 00:20:36.073 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2008868 ']' 00:20:36.073 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.073 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.073 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:20:36.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.073 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.073 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.073 [2024-11-19 18:19:37.365632] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:20:36.073 [2024-11-19 18:19:37.365687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2008868 ] 00:20:36.073 [2024-11-19 18:19:37.446342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.073 [2024-11-19 18:19:37.475836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.013 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.013 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:37.013 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UPgGCcDP1P 00:20:37.013 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:37.273 [2024-11-19 18:19:38.490865] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.273 TLSTESTn1 00:20:37.273 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:37.534 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:37.534 "subsystems": [ 00:20:37.534 { 00:20:37.534 "subsystem": "keyring", 00:20:37.534 "config": [ 00:20:37.534 { 00:20:37.534 "method": "keyring_file_add_key", 00:20:37.534 "params": { 00:20:37.535 "name": "key0", 00:20:37.535 "path": "/tmp/tmp.UPgGCcDP1P" 00:20:37.535 } 00:20:37.535 } 00:20:37.535 ] 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "subsystem": "iobuf", 00:20:37.535 "config": [ 00:20:37.535 { 00:20:37.535 "method": "iobuf_set_options", 00:20:37.535 "params": { 00:20:37.535 "small_pool_count": 8192, 00:20:37.535 "large_pool_count": 1024, 00:20:37.535 "small_bufsize": 8192, 00:20:37.535 "large_bufsize": 135168, 00:20:37.535 "enable_numa": false 00:20:37.535 } 00:20:37.535 } 00:20:37.535 ] 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "subsystem": "sock", 00:20:37.535 "config": [ 00:20:37.535 { 00:20:37.535 "method": "sock_set_default_impl", 00:20:37.535 "params": { 00:20:37.535 "impl_name": "posix" 00:20:37.535 } 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "method": "sock_impl_set_options", 00:20:37.535 "params": { 00:20:37.535 "impl_name": "ssl", 00:20:37.535 "recv_buf_size": 4096, 00:20:37.535 "send_buf_size": 4096, 00:20:37.535 "enable_recv_pipe": true, 00:20:37.535 "enable_quickack": false, 00:20:37.535 "enable_placement_id": 0, 00:20:37.535 "enable_zerocopy_send_server": true, 00:20:37.535 "enable_zerocopy_send_client": false, 00:20:37.535 "zerocopy_threshold": 0, 00:20:37.535 "tls_version": 0, 00:20:37.535 "enable_ktls": false 00:20:37.535 } 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "method": "sock_impl_set_options", 00:20:37.535 "params": { 00:20:37.535 "impl_name": "posix", 00:20:37.535 "recv_buf_size": 2097152, 00:20:37.535 "send_buf_size": 2097152, 00:20:37.535 "enable_recv_pipe": true, 00:20:37.535 "enable_quickack": false, 00:20:37.535 "enable_placement_id": 0, 
00:20:37.535 "enable_zerocopy_send_server": true, 00:20:37.535 "enable_zerocopy_send_client": false, 00:20:37.535 "zerocopy_threshold": 0, 00:20:37.535 "tls_version": 0, 00:20:37.535 "enable_ktls": false 00:20:37.535 } 00:20:37.535 } 00:20:37.535 ] 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "subsystem": "vmd", 00:20:37.535 "config": [] 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "subsystem": "accel", 00:20:37.535 "config": [ 00:20:37.535 { 00:20:37.535 "method": "accel_set_options", 00:20:37.535 "params": { 00:20:37.535 "small_cache_size": 128, 00:20:37.535 "large_cache_size": 16, 00:20:37.535 "task_count": 2048, 00:20:37.535 "sequence_count": 2048, 00:20:37.535 "buf_count": 2048 00:20:37.535 } 00:20:37.535 } 00:20:37.535 ] 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "subsystem": "bdev", 00:20:37.535 "config": [ 00:20:37.535 { 00:20:37.535 "method": "bdev_set_options", 00:20:37.535 "params": { 00:20:37.535 "bdev_io_pool_size": 65535, 00:20:37.535 "bdev_io_cache_size": 256, 00:20:37.535 "bdev_auto_examine": true, 00:20:37.535 "iobuf_small_cache_size": 128, 00:20:37.535 "iobuf_large_cache_size": 16 00:20:37.535 } 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "method": "bdev_raid_set_options", 00:20:37.535 "params": { 00:20:37.535 "process_window_size_kb": 1024, 00:20:37.535 "process_max_bandwidth_mb_sec": 0 00:20:37.535 } 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "method": "bdev_iscsi_set_options", 00:20:37.535 "params": { 00:20:37.535 "timeout_sec": 30 00:20:37.535 } 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "method": "bdev_nvme_set_options", 00:20:37.535 "params": { 00:20:37.535 "action_on_timeout": "none", 00:20:37.535 "timeout_us": 0, 00:20:37.535 "timeout_admin_us": 0, 00:20:37.535 "keep_alive_timeout_ms": 10000, 00:20:37.535 "arbitration_burst": 0, 00:20:37.535 "low_priority_weight": 0, 00:20:37.535 "medium_priority_weight": 0, 00:20:37.535 "high_priority_weight": 0, 00:20:37.535 "nvme_adminq_poll_period_us": 10000, 00:20:37.535 "nvme_ioq_poll_period_us": 0, 
00:20:37.535 "io_queue_requests": 0, 00:20:37.535 "delay_cmd_submit": true, 00:20:37.535 "transport_retry_count": 4, 00:20:37.535 "bdev_retry_count": 3, 00:20:37.535 "transport_ack_timeout": 0, 00:20:37.535 "ctrlr_loss_timeout_sec": 0, 00:20:37.535 "reconnect_delay_sec": 0, 00:20:37.535 "fast_io_fail_timeout_sec": 0, 00:20:37.535 "disable_auto_failback": false, 00:20:37.535 "generate_uuids": false, 00:20:37.535 "transport_tos": 0, 00:20:37.535 "nvme_error_stat": false, 00:20:37.535 "rdma_srq_size": 0, 00:20:37.535 "io_path_stat": false, 00:20:37.535 "allow_accel_sequence": false, 00:20:37.535 "rdma_max_cq_size": 0, 00:20:37.535 "rdma_cm_event_timeout_ms": 0, 00:20:37.535 "dhchap_digests": [ 00:20:37.535 "sha256", 00:20:37.535 "sha384", 00:20:37.535 "sha512" 00:20:37.535 ], 00:20:37.535 "dhchap_dhgroups": [ 00:20:37.535 "null", 00:20:37.535 "ffdhe2048", 00:20:37.535 "ffdhe3072", 00:20:37.535 "ffdhe4096", 00:20:37.535 "ffdhe6144", 00:20:37.535 "ffdhe8192" 00:20:37.535 ] 00:20:37.535 } 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "method": "bdev_nvme_set_hotplug", 00:20:37.535 "params": { 00:20:37.535 "period_us": 100000, 00:20:37.535 "enable": false 00:20:37.535 } 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "method": "bdev_malloc_create", 00:20:37.535 "params": { 00:20:37.535 "name": "malloc0", 00:20:37.535 "num_blocks": 8192, 00:20:37.535 "block_size": 4096, 00:20:37.535 "physical_block_size": 4096, 00:20:37.535 "uuid": "9945ce03-6884-455b-8e25-35c42bc6f674", 00:20:37.535 "optimal_io_boundary": 0, 00:20:37.535 "md_size": 0, 00:20:37.535 "dif_type": 0, 00:20:37.535 "dif_is_head_of_md": false, 00:20:37.535 "dif_pi_format": 0 00:20:37.535 } 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "method": "bdev_wait_for_examine" 00:20:37.535 } 00:20:37.535 ] 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "subsystem": "nbd", 00:20:37.535 "config": [] 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "subsystem": "scheduler", 00:20:37.535 "config": [ 00:20:37.535 { 00:20:37.535 "method": 
"framework_set_scheduler", 00:20:37.535 "params": { 00:20:37.535 "name": "static" 00:20:37.535 } 00:20:37.535 } 00:20:37.535 ] 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "subsystem": "nvmf", 00:20:37.535 "config": [ 00:20:37.535 { 00:20:37.535 "method": "nvmf_set_config", 00:20:37.535 "params": { 00:20:37.535 "discovery_filter": "match_any", 00:20:37.535 "admin_cmd_passthru": { 00:20:37.535 "identify_ctrlr": false 00:20:37.535 }, 00:20:37.535 "dhchap_digests": [ 00:20:37.535 "sha256", 00:20:37.535 "sha384", 00:20:37.535 "sha512" 00:20:37.535 ], 00:20:37.535 "dhchap_dhgroups": [ 00:20:37.535 "null", 00:20:37.535 "ffdhe2048", 00:20:37.535 "ffdhe3072", 00:20:37.535 "ffdhe4096", 00:20:37.535 "ffdhe6144", 00:20:37.535 "ffdhe8192" 00:20:37.535 ] 00:20:37.535 } 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "method": "nvmf_set_max_subsystems", 00:20:37.535 "params": { 00:20:37.535 "max_subsystems": 1024 00:20:37.535 } 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "method": "nvmf_set_crdt", 00:20:37.535 "params": { 00:20:37.535 "crdt1": 0, 00:20:37.535 "crdt2": 0, 00:20:37.535 "crdt3": 0 00:20:37.535 } 00:20:37.535 }, 00:20:37.535 { 00:20:37.535 "method": "nvmf_create_transport", 00:20:37.535 "params": { 00:20:37.535 "trtype": "TCP", 00:20:37.535 "max_queue_depth": 128, 00:20:37.535 "max_io_qpairs_per_ctrlr": 127, 00:20:37.535 "in_capsule_data_size": 4096, 00:20:37.536 "max_io_size": 131072, 00:20:37.536 "io_unit_size": 131072, 00:20:37.536 "max_aq_depth": 128, 00:20:37.536 "num_shared_buffers": 511, 00:20:37.536 "buf_cache_size": 4294967295, 00:20:37.536 "dif_insert_or_strip": false, 00:20:37.536 "zcopy": false, 00:20:37.536 "c2h_success": false, 00:20:37.536 "sock_priority": 0, 00:20:37.536 "abort_timeout_sec": 1, 00:20:37.536 "ack_timeout": 0, 00:20:37.536 "data_wr_pool_size": 0 00:20:37.536 } 00:20:37.536 }, 00:20:37.536 { 00:20:37.536 "method": "nvmf_create_subsystem", 00:20:37.536 "params": { 00:20:37.536 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.536 
"allow_any_host": false, 00:20:37.536 "serial_number": "SPDK00000000000001", 00:20:37.536 "model_number": "SPDK bdev Controller", 00:20:37.536 "max_namespaces": 10, 00:20:37.536 "min_cntlid": 1, 00:20:37.536 "max_cntlid": 65519, 00:20:37.536 "ana_reporting": false 00:20:37.536 } 00:20:37.536 }, 00:20:37.536 { 00:20:37.536 "method": "nvmf_subsystem_add_host", 00:20:37.536 "params": { 00:20:37.536 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.536 "host": "nqn.2016-06.io.spdk:host1", 00:20:37.536 "psk": "key0" 00:20:37.536 } 00:20:37.536 }, 00:20:37.536 { 00:20:37.536 "method": "nvmf_subsystem_add_ns", 00:20:37.536 "params": { 00:20:37.536 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.536 "namespace": { 00:20:37.536 "nsid": 1, 00:20:37.536 "bdev_name": "malloc0", 00:20:37.536 "nguid": "9945CE036884455B8E2535C42BC6F674", 00:20:37.536 "uuid": "9945ce03-6884-455b-8e25-35c42bc6f674", 00:20:37.536 "no_auto_visible": false 00:20:37.536 } 00:20:37.536 } 00:20:37.536 }, 00:20:37.536 { 00:20:37.536 "method": "nvmf_subsystem_add_listener", 00:20:37.536 "params": { 00:20:37.536 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.536 "listen_address": { 00:20:37.536 "trtype": "TCP", 00:20:37.536 "adrfam": "IPv4", 00:20:37.536 "traddr": "10.0.0.2", 00:20:37.536 "trsvcid": "4420" 00:20:37.536 }, 00:20:37.536 "secure_channel": true 00:20:37.536 } 00:20:37.536 } 00:20:37.536 ] 00:20:37.536 } 00:20:37.536 ] 00:20:37.536 }' 00:20:37.536 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:37.796 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:37.796 "subsystems": [ 00:20:37.796 { 00:20:37.796 "subsystem": "keyring", 00:20:37.796 "config": [ 00:20:37.796 { 00:20:37.796 "method": "keyring_file_add_key", 00:20:37.796 "params": { 00:20:37.796 "name": "key0", 00:20:37.796 "path": "/tmp/tmp.UPgGCcDP1P" 00:20:37.796 } 
00:20:37.796 } 00:20:37.796 ] 00:20:37.796 }, 00:20:37.796 { 00:20:37.796 "subsystem": "iobuf", 00:20:37.796 "config": [ 00:20:37.796 { 00:20:37.796 "method": "iobuf_set_options", 00:20:37.796 "params": { 00:20:37.796 "small_pool_count": 8192, 00:20:37.796 "large_pool_count": 1024, 00:20:37.796 "small_bufsize": 8192, 00:20:37.796 "large_bufsize": 135168, 00:20:37.796 "enable_numa": false 00:20:37.796 } 00:20:37.796 } 00:20:37.796 ] 00:20:37.796 }, 00:20:37.796 { 00:20:37.796 "subsystem": "sock", 00:20:37.796 "config": [ 00:20:37.796 { 00:20:37.796 "method": "sock_set_default_impl", 00:20:37.796 "params": { 00:20:37.796 "impl_name": "posix" 00:20:37.796 } 00:20:37.796 }, 00:20:37.796 { 00:20:37.796 "method": "sock_impl_set_options", 00:20:37.796 "params": { 00:20:37.796 "impl_name": "ssl", 00:20:37.796 "recv_buf_size": 4096, 00:20:37.796 "send_buf_size": 4096, 00:20:37.796 "enable_recv_pipe": true, 00:20:37.796 "enable_quickack": false, 00:20:37.796 "enable_placement_id": 0, 00:20:37.796 "enable_zerocopy_send_server": true, 00:20:37.796 "enable_zerocopy_send_client": false, 00:20:37.796 "zerocopy_threshold": 0, 00:20:37.796 "tls_version": 0, 00:20:37.796 "enable_ktls": false 00:20:37.796 } 00:20:37.796 }, 00:20:37.796 { 00:20:37.796 "method": "sock_impl_set_options", 00:20:37.796 "params": { 00:20:37.796 "impl_name": "posix", 00:20:37.796 "recv_buf_size": 2097152, 00:20:37.796 "send_buf_size": 2097152, 00:20:37.796 "enable_recv_pipe": true, 00:20:37.796 "enable_quickack": false, 00:20:37.796 "enable_placement_id": 0, 00:20:37.796 "enable_zerocopy_send_server": true, 00:20:37.796 "enable_zerocopy_send_client": false, 00:20:37.796 "zerocopy_threshold": 0, 00:20:37.796 "tls_version": 0, 00:20:37.796 "enable_ktls": false 00:20:37.796 } 00:20:37.796 } 00:20:37.796 ] 00:20:37.796 }, 00:20:37.796 { 00:20:37.796 "subsystem": "vmd", 00:20:37.796 "config": [] 00:20:37.796 }, 00:20:37.796 { 00:20:37.796 "subsystem": "accel", 00:20:37.796 "config": [ 00:20:37.796 { 00:20:37.796 
"method": "accel_set_options", 00:20:37.796 "params": { 00:20:37.796 "small_cache_size": 128, 00:20:37.796 "large_cache_size": 16, 00:20:37.796 "task_count": 2048, 00:20:37.796 "sequence_count": 2048, 00:20:37.796 "buf_count": 2048 00:20:37.797 } 00:20:37.797 } 00:20:37.797 ] 00:20:37.797 }, 00:20:37.797 { 00:20:37.797 "subsystem": "bdev", 00:20:37.797 "config": [ 00:20:37.797 { 00:20:37.797 "method": "bdev_set_options", 00:20:37.797 "params": { 00:20:37.797 "bdev_io_pool_size": 65535, 00:20:37.797 "bdev_io_cache_size": 256, 00:20:37.797 "bdev_auto_examine": true, 00:20:37.797 "iobuf_small_cache_size": 128, 00:20:37.797 "iobuf_large_cache_size": 16 00:20:37.797 } 00:20:37.797 }, 00:20:37.797 { 00:20:37.797 "method": "bdev_raid_set_options", 00:20:37.797 "params": { 00:20:37.797 "process_window_size_kb": 1024, 00:20:37.797 "process_max_bandwidth_mb_sec": 0 00:20:37.797 } 00:20:37.797 }, 00:20:37.797 { 00:20:37.797 "method": "bdev_iscsi_set_options", 00:20:37.797 "params": { 00:20:37.797 "timeout_sec": 30 00:20:37.797 } 00:20:37.797 }, 00:20:37.797 { 00:20:37.797 "method": "bdev_nvme_set_options", 00:20:37.797 "params": { 00:20:37.797 "action_on_timeout": "none", 00:20:37.797 "timeout_us": 0, 00:20:37.797 "timeout_admin_us": 0, 00:20:37.797 "keep_alive_timeout_ms": 10000, 00:20:37.797 "arbitration_burst": 0, 00:20:37.797 "low_priority_weight": 0, 00:20:37.797 "medium_priority_weight": 0, 00:20:37.797 "high_priority_weight": 0, 00:20:37.797 "nvme_adminq_poll_period_us": 10000, 00:20:37.797 "nvme_ioq_poll_period_us": 0, 00:20:37.797 "io_queue_requests": 512, 00:20:37.797 "delay_cmd_submit": true, 00:20:37.797 "transport_retry_count": 4, 00:20:37.797 "bdev_retry_count": 3, 00:20:37.797 "transport_ack_timeout": 0, 00:20:37.797 "ctrlr_loss_timeout_sec": 0, 00:20:37.797 "reconnect_delay_sec": 0, 00:20:37.797 "fast_io_fail_timeout_sec": 0, 00:20:37.797 "disable_auto_failback": false, 00:20:37.797 "generate_uuids": false, 00:20:37.797 "transport_tos": 0, 00:20:37.797 
"nvme_error_stat": false, 00:20:37.797 "rdma_srq_size": 0, 00:20:37.797 "io_path_stat": false, 00:20:37.797 "allow_accel_sequence": false, 00:20:37.797 "rdma_max_cq_size": 0, 00:20:37.797 "rdma_cm_event_timeout_ms": 0, 00:20:37.797 "dhchap_digests": [ 00:20:37.797 "sha256", 00:20:37.797 "sha384", 00:20:37.797 "sha512" 00:20:37.797 ], 00:20:37.797 "dhchap_dhgroups": [ 00:20:37.797 "null", 00:20:37.797 "ffdhe2048", 00:20:37.797 "ffdhe3072", 00:20:37.797 "ffdhe4096", 00:20:37.797 "ffdhe6144", 00:20:37.797 "ffdhe8192" 00:20:37.797 ] 00:20:37.797 } 00:20:37.797 }, 00:20:37.797 { 00:20:37.797 "method": "bdev_nvme_attach_controller", 00:20:37.797 "params": { 00:20:37.797 "name": "TLSTEST", 00:20:37.797 "trtype": "TCP", 00:20:37.797 "adrfam": "IPv4", 00:20:37.797 "traddr": "10.0.0.2", 00:20:37.797 "trsvcid": "4420", 00:20:37.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.797 "prchk_reftag": false, 00:20:37.797 "prchk_guard": false, 00:20:37.797 "ctrlr_loss_timeout_sec": 0, 00:20:37.797 "reconnect_delay_sec": 0, 00:20:37.797 "fast_io_fail_timeout_sec": 0, 00:20:37.797 "psk": "key0", 00:20:37.797 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.797 "hdgst": false, 00:20:37.797 "ddgst": false, 00:20:37.797 "multipath": "multipath" 00:20:37.797 } 00:20:37.797 }, 00:20:37.797 { 00:20:37.797 "method": "bdev_nvme_set_hotplug", 00:20:37.797 "params": { 00:20:37.797 "period_us": 100000, 00:20:37.797 "enable": false 00:20:37.797 } 00:20:37.797 }, 00:20:37.797 { 00:20:37.797 "method": "bdev_wait_for_examine" 00:20:37.797 } 00:20:37.797 ] 00:20:37.797 }, 00:20:37.797 { 00:20:37.797 "subsystem": "nbd", 00:20:37.797 "config": [] 00:20:37.797 } 00:20:37.797 ] 00:20:37.797 }' 00:20:37.797 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2008868 00:20:37.797 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2008868 ']' 00:20:37.797 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2008868 00:20:37.797 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:37.797 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.797 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2008868 00:20:37.797 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:37.797 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:37.797 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2008868' 00:20:37.797 killing process with pid 2008868 00:20:37.797 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2008868 00:20:37.797 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.797 00:20:37.797 Latency(us) 00:20:37.797 [2024-11-19T17:19:39.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.797 [2024-11-19T17:19:39.268Z] =================================================================================================================== 00:20:37.797 [2024-11-19T17:19:39.268Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:37.797 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2008868 00:20:37.797 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2008486 00:20:37.797 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2008486 ']' 00:20:37.797 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2008486 00:20:37.797 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:38.059 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.059 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2008486 00:20:38.059 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:38.059 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:38.059 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2008486' 00:20:38.059 killing process with pid 2008486 00:20:38.059 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2008486 00:20:38.059 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2008486 00:20:38.059 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:38.059 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:38.059 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.059 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.059 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:38.059 "subsystems": [ 00:20:38.059 { 00:20:38.059 "subsystem": "keyring", 00:20:38.059 "config": [ 00:20:38.059 { 00:20:38.059 "method": "keyring_file_add_key", 00:20:38.059 "params": { 00:20:38.059 "name": "key0", 00:20:38.059 "path": "/tmp/tmp.UPgGCcDP1P" 00:20:38.059 } 00:20:38.059 } 00:20:38.059 ] 00:20:38.059 }, 00:20:38.059 { 00:20:38.059 "subsystem": "iobuf", 00:20:38.059 "config": [ 00:20:38.059 { 00:20:38.059 "method": "iobuf_set_options", 00:20:38.059 "params": { 00:20:38.059 "small_pool_count": 8192, 00:20:38.059 "large_pool_count": 1024, 00:20:38.059 "small_bufsize": 8192, 00:20:38.059 "large_bufsize": 135168, 
00:20:38.059 "enable_numa": false 00:20:38.059 } 00:20:38.059 } 00:20:38.059 ] 00:20:38.059 }, 00:20:38.059 { 00:20:38.059 "subsystem": "sock", 00:20:38.059 "config": [ 00:20:38.059 { 00:20:38.059 "method": "sock_set_default_impl", 00:20:38.059 "params": { 00:20:38.059 "impl_name": "posix" 00:20:38.059 } 00:20:38.059 }, 00:20:38.059 { 00:20:38.059 "method": "sock_impl_set_options", 00:20:38.059 "params": { 00:20:38.059 "impl_name": "ssl", 00:20:38.059 "recv_buf_size": 4096, 00:20:38.059 "send_buf_size": 4096, 00:20:38.059 "enable_recv_pipe": true, 00:20:38.059 "enable_quickack": false, 00:20:38.059 "enable_placement_id": 0, 00:20:38.059 "enable_zerocopy_send_server": true, 00:20:38.059 "enable_zerocopy_send_client": false, 00:20:38.059 "zerocopy_threshold": 0, 00:20:38.059 "tls_version": 0, 00:20:38.059 "enable_ktls": false 00:20:38.059 } 00:20:38.059 }, 00:20:38.059 { 00:20:38.059 "method": "sock_impl_set_options", 00:20:38.059 "params": { 00:20:38.059 "impl_name": "posix", 00:20:38.059 "recv_buf_size": 2097152, 00:20:38.059 "send_buf_size": 2097152, 00:20:38.059 "enable_recv_pipe": true, 00:20:38.059 "enable_quickack": false, 00:20:38.059 "enable_placement_id": 0, 00:20:38.059 "enable_zerocopy_send_server": true, 00:20:38.059 "enable_zerocopy_send_client": false, 00:20:38.059 "zerocopy_threshold": 0, 00:20:38.059 "tls_version": 0, 00:20:38.059 "enable_ktls": false 00:20:38.059 } 00:20:38.059 } 00:20:38.059 ] 00:20:38.059 }, 00:20:38.059 { 00:20:38.059 "subsystem": "vmd", 00:20:38.059 "config": [] 00:20:38.059 }, 00:20:38.059 { 00:20:38.059 "subsystem": "accel", 00:20:38.059 "config": [ 00:20:38.059 { 00:20:38.059 "method": "accel_set_options", 00:20:38.059 "params": { 00:20:38.059 "small_cache_size": 128, 00:20:38.059 "large_cache_size": 16, 00:20:38.059 "task_count": 2048, 00:20:38.059 "sequence_count": 2048, 00:20:38.059 "buf_count": 2048 00:20:38.059 } 00:20:38.059 } 00:20:38.059 ] 00:20:38.059 }, 00:20:38.059 { 00:20:38.059 "subsystem": "bdev", 00:20:38.059 
"config": [ 00:20:38.059 { 00:20:38.059 "method": "bdev_set_options", 00:20:38.059 "params": { 00:20:38.059 "bdev_io_pool_size": 65535, 00:20:38.059 "bdev_io_cache_size": 256, 00:20:38.059 "bdev_auto_examine": true, 00:20:38.059 "iobuf_small_cache_size": 128, 00:20:38.060 "iobuf_large_cache_size": 16 00:20:38.060 } 00:20:38.060 }, 00:20:38.060 { 00:20:38.060 "method": "bdev_raid_set_options", 00:20:38.060 "params": { 00:20:38.060 "process_window_size_kb": 1024, 00:20:38.060 "process_max_bandwidth_mb_sec": 0 00:20:38.060 } 00:20:38.060 }, 00:20:38.060 { 00:20:38.060 "method": "bdev_iscsi_set_options", 00:20:38.060 "params": { 00:20:38.060 "timeout_sec": 30 00:20:38.060 } 00:20:38.060 }, 00:20:38.060 { 00:20:38.060 "method": "bdev_nvme_set_options", 00:20:38.060 "params": { 00:20:38.060 "action_on_timeout": "none", 00:20:38.060 "timeout_us": 0, 00:20:38.060 "timeout_admin_us": 0, 00:20:38.060 "keep_alive_timeout_ms": 10000, 00:20:38.060 "arbitration_burst": 0, 00:20:38.060 "low_priority_weight": 0, 00:20:38.060 "medium_priority_weight": 0, 00:20:38.060 "high_priority_weight": 0, 00:20:38.060 "nvme_adminq_poll_period_us": 10000, 00:20:38.060 "nvme_ioq_poll_period_us": 0, 00:20:38.060 "io_queue_requests": 0, 00:20:38.060 "delay_cmd_submit": true, 00:20:38.060 "transport_retry_count": 4, 00:20:38.060 "bdev_retry_count": 3, 00:20:38.060 "transport_ack_timeout": 0, 00:20:38.060 "ctrlr_loss_timeout_sec": 0, 00:20:38.060 "reconnect_delay_sec": 0, 00:20:38.060 "fast_io_fail_timeout_sec": 0, 00:20:38.060 "disable_auto_failback": false, 00:20:38.060 "generate_uuids": false, 00:20:38.060 "transport_tos": 0, 00:20:38.060 "nvme_error_stat": false, 00:20:38.060 "rdma_srq_size": 0, 00:20:38.060 "io_path_stat": false, 00:20:38.060 "allow_accel_sequence": false, 00:20:38.060 "rdma_max_cq_size": 0, 00:20:38.060 "rdma_cm_event_timeout_ms": 0, 00:20:38.060 "dhchap_digests": [ 00:20:38.060 "sha256", 00:20:38.060 "sha384", 00:20:38.060 "sha512" 00:20:38.060 ], 00:20:38.060 
"dhchap_dhgroups": [ 00:20:38.060 "null", 00:20:38.060 "ffdhe2048", 00:20:38.060 "ffdhe3072", 00:20:38.060 "ffdhe4096", 00:20:38.060 "ffdhe6144", 00:20:38.060 "ffdhe8192" 00:20:38.060 ] 00:20:38.060 } 00:20:38.060 }, 00:20:38.060 { 00:20:38.060 "method": "bdev_nvme_set_hotplug", 00:20:38.060 "params": { 00:20:38.060 "period_us": 100000, 00:20:38.060 "enable": false 00:20:38.060 } 00:20:38.060 }, 00:20:38.060 { 00:20:38.060 "method": "bdev_malloc_create", 00:20:38.060 "params": { 00:20:38.060 "name": "malloc0", 00:20:38.060 "num_blocks": 8192, 00:20:38.060 "block_size": 4096, 00:20:38.060 "physical_block_size": 4096, 00:20:38.060 "uuid": "9945ce03-6884-455b-8e25-35c42bc6f674", 00:20:38.060 "optimal_io_boundary": 0, 00:20:38.060 "md_size": 0, 00:20:38.060 "dif_type": 0, 00:20:38.060 "dif_is_head_of_md": false, 00:20:38.060 "dif_pi_format": 0 00:20:38.060 } 00:20:38.060 }, 00:20:38.060 { 00:20:38.060 "method": "bdev_wait_for_examine" 00:20:38.060 } 00:20:38.060 ] 00:20:38.060 }, 00:20:38.060 { 00:20:38.060 "subsystem": "nbd", 00:20:38.060 "config": [] 00:20:38.060 }, 00:20:38.060 { 00:20:38.060 "subsystem": "scheduler", 00:20:38.060 "config": [ 00:20:38.060 { 00:20:38.060 "method": "framework_set_scheduler", 00:20:38.060 "params": { 00:20:38.060 "name": "static" 00:20:38.060 } 00:20:38.060 } 00:20:38.060 ] 00:20:38.060 }, 00:20:38.060 { 00:20:38.060 "subsystem": "nvmf", 00:20:38.060 "config": [ 00:20:38.060 { 00:20:38.060 "method": "nvmf_set_config", 00:20:38.060 "params": { 00:20:38.060 "discovery_filter": "match_any", 00:20:38.060 "admin_cmd_passthru": { 00:20:38.060 "identify_ctrlr": false 00:20:38.060 }, 00:20:38.060 "dhchap_digests": [ 00:20:38.060 "sha256", 00:20:38.060 "sha384", 00:20:38.060 "sha512" 00:20:38.060 ], 00:20:38.060 "dhchap_dhgroups": [ 00:20:38.060 "null", 00:20:38.060 "ffdhe2048", 00:20:38.060 "ffdhe3072", 00:20:38.060 "ffdhe4096", 00:20:38.060 "ffdhe6144", 00:20:38.060 "ffdhe8192" 00:20:38.060 ] 00:20:38.060 } 00:20:38.060 }, 00:20:38.060 { 
00:20:38.060 "method": "nvmf_set_max_subsystems", 00:20:38.060 "params": { 00:20:38.060 "max_subsystems": 1024 00:20:38.060 } 00:20:38.060 }, 00:20:38.060 { 00:20:38.060 "method": "nvmf_set_crdt", 00:20:38.060 "params": { 00:20:38.060 "crdt1": 0, 00:20:38.060 "crdt2": 0, 00:20:38.060 "crdt3": 0 00:20:38.060 } 00:20:38.060 }, 00:20:38.060 { 00:20:38.060 "method": "nvmf_create_transport", 00:20:38.060 "params": { 00:20:38.060 "trtype": "TCP", 00:20:38.060 "max_queue_depth": 128, 00:20:38.060 "max_io_qpairs_per_ctrlr": 127, 00:20:38.060 "in_capsule_data_size": 4096, 00:20:38.060 "max_io_size": 131072, 00:20:38.060 "io_unit_size": 131072, 00:20:38.060 "max_aq_depth": 128, 00:20:38.060 "num_shared_buffers": 511, 00:20:38.060 "buf_cache_size": 4294967295, 00:20:38.060 "dif_insert_or_strip": false, 00:20:38.060 "zcopy": false, 00:20:38.060 "c2h_success": false, 00:20:38.060 "sock_priority": 0, 00:20:38.060 "abort_timeout_sec": 1, 00:20:38.060 "ack_timeout": 0, 00:20:38.060 "data_wr_pool_size": 0 00:20:38.060 } 00:20:38.060 }, 00:20:38.060 { 00:20:38.060 "method": "nvmf_create_subsystem", 00:20:38.060 "params": { 00:20:38.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.060 "allow_any_host": false, 00:20:38.060 "serial_number": "SPDK00000000000001", 00:20:38.060 "model_number": "SPDK bdev Controller", 00:20:38.060 "max_namespaces": 10, 00:20:38.060 "min_cntlid": 1, 00:20:38.060 "max_cntlid": 65519, 00:20:38.060 "ana_reporting": false 00:20:38.060 } 00:20:38.060 }, 00:20:38.060 { 00:20:38.060 "method": "nvmf_subsystem_add_host", 00:20:38.060 "params": { 00:20:38.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.060 "host": "nqn.2016-06.io.spdk:host1", 00:20:38.060 "psk": "key0" 00:20:38.060 } 00:20:38.060 }, 00:20:38.060 { 00:20:38.060 "method": "nvmf_subsystem_add_ns", 00:20:38.060 "params": { 00:20:38.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.060 "namespace": { 00:20:38.060 "nsid": 1, 00:20:38.060 "bdev_name": "malloc0", 00:20:38.060 "nguid": 
"9945CE036884455B8E2535C42BC6F674", 00:20:38.060 "uuid": "9945ce03-6884-455b-8e25-35c42bc6f674", 00:20:38.060 "no_auto_visible": false 00:20:38.060 } 00:20:38.060 } 00:20:38.060 }, 00:20:38.060 { 00:20:38.060 "method": "nvmf_subsystem_add_listener", 00:20:38.060 "params": { 00:20:38.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.060 "listen_address": { 00:20:38.060 "trtype": "TCP", 00:20:38.060 "adrfam": "IPv4", 00:20:38.060 "traddr": "10.0.0.2", 00:20:38.060 "trsvcid": "4420" 00:20:38.060 }, 00:20:38.060 "secure_channel": true 00:20:38.060 } 00:20:38.060 } 00:20:38.060 ] 00:20:38.060 } 00:20:38.060 ] 00:20:38.060 }' 00:20:38.060 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2009401 00:20:38.060 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2009401 00:20:38.060 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:38.060 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2009401 ']' 00:20:38.060 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.060 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.060 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:38.060 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.060 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.061 [2024-11-19 18:19:39.493202] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:20:38.061 [2024-11-19 18:19:39.493259] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.321 [2024-11-19 18:19:39.581793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.321 [2024-11-19 18:19:39.611526] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.321 [2024-11-19 18:19:39.611551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.321 [2024-11-19 18:19:39.611557] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.321 [2024-11-19 18:19:39.611562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.321 [2024-11-19 18:19:39.611566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:38.321 [2024-11-19 18:19:39.612013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.580 [2024-11-19 18:19:39.804039] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.580 [2024-11-19 18:19:39.836067] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:38.580 [2024-11-19 18:19:39.836265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.840 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.840 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:38.840 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:38.840 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.840 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.101 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.101 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2009554 00:20:39.101 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2009554 /var/tmp/bdevperf.sock 00:20:39.101 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2009554 ']' 00:20:39.101 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.101 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.101 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:39.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.101 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:39.101 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.101 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.101 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:39.101 "subsystems": [ 00:20:39.101 { 00:20:39.101 "subsystem": "keyring", 00:20:39.101 "config": [ 00:20:39.101 { 00:20:39.101 "method": "keyring_file_add_key", 00:20:39.101 "params": { 00:20:39.101 "name": "key0", 00:20:39.101 "path": "/tmp/tmp.UPgGCcDP1P" 00:20:39.101 } 00:20:39.101 } 00:20:39.101 ] 00:20:39.101 }, 00:20:39.101 { 00:20:39.101 "subsystem": "iobuf", 00:20:39.101 "config": [ 00:20:39.101 { 00:20:39.101 "method": "iobuf_set_options", 00:20:39.101 "params": { 00:20:39.101 "small_pool_count": 8192, 00:20:39.101 "large_pool_count": 1024, 00:20:39.101 "small_bufsize": 8192, 00:20:39.101 "large_bufsize": 135168, 00:20:39.101 "enable_numa": false 00:20:39.101 } 00:20:39.101 } 00:20:39.101 ] 00:20:39.101 }, 00:20:39.101 { 00:20:39.101 "subsystem": "sock", 00:20:39.101 "config": [ 00:20:39.101 { 00:20:39.101 "method": "sock_set_default_impl", 00:20:39.101 "params": { 00:20:39.101 "impl_name": "posix" 00:20:39.101 } 00:20:39.101 }, 00:20:39.101 { 00:20:39.101 "method": "sock_impl_set_options", 00:20:39.101 "params": { 00:20:39.101 "impl_name": "ssl", 00:20:39.101 "recv_buf_size": 4096, 00:20:39.101 "send_buf_size": 4096, 00:20:39.101 "enable_recv_pipe": true, 00:20:39.101 "enable_quickack": false, 00:20:39.101 "enable_placement_id": 0, 00:20:39.101 "enable_zerocopy_send_server": true, 00:20:39.101 
"enable_zerocopy_send_client": false, 00:20:39.101 "zerocopy_threshold": 0, 00:20:39.101 "tls_version": 0, 00:20:39.101 "enable_ktls": false 00:20:39.101 } 00:20:39.101 }, 00:20:39.101 { 00:20:39.101 "method": "sock_impl_set_options", 00:20:39.101 "params": { 00:20:39.101 "impl_name": "posix", 00:20:39.101 "recv_buf_size": 2097152, 00:20:39.101 "send_buf_size": 2097152, 00:20:39.101 "enable_recv_pipe": true, 00:20:39.101 "enable_quickack": false, 00:20:39.101 "enable_placement_id": 0, 00:20:39.101 "enable_zerocopy_send_server": true, 00:20:39.101 "enable_zerocopy_send_client": false, 00:20:39.101 "zerocopy_threshold": 0, 00:20:39.101 "tls_version": 0, 00:20:39.101 "enable_ktls": false 00:20:39.101 } 00:20:39.101 } 00:20:39.101 ] 00:20:39.101 }, 00:20:39.101 { 00:20:39.101 "subsystem": "vmd", 00:20:39.101 "config": [] 00:20:39.101 }, 00:20:39.101 { 00:20:39.101 "subsystem": "accel", 00:20:39.101 "config": [ 00:20:39.101 { 00:20:39.101 "method": "accel_set_options", 00:20:39.101 "params": { 00:20:39.101 "small_cache_size": 128, 00:20:39.101 "large_cache_size": 16, 00:20:39.101 "task_count": 2048, 00:20:39.101 "sequence_count": 2048, 00:20:39.101 "buf_count": 2048 00:20:39.101 } 00:20:39.101 } 00:20:39.101 ] 00:20:39.101 }, 00:20:39.101 { 00:20:39.101 "subsystem": "bdev", 00:20:39.101 "config": [ 00:20:39.101 { 00:20:39.101 "method": "bdev_set_options", 00:20:39.101 "params": { 00:20:39.101 "bdev_io_pool_size": 65535, 00:20:39.101 "bdev_io_cache_size": 256, 00:20:39.101 "bdev_auto_examine": true, 00:20:39.101 "iobuf_small_cache_size": 128, 00:20:39.101 "iobuf_large_cache_size": 16 00:20:39.101 } 00:20:39.101 }, 00:20:39.101 { 00:20:39.101 "method": "bdev_raid_set_options", 00:20:39.101 "params": { 00:20:39.101 "process_window_size_kb": 1024, 00:20:39.101 "process_max_bandwidth_mb_sec": 0 00:20:39.101 } 00:20:39.101 }, 00:20:39.101 { 00:20:39.101 "method": "bdev_iscsi_set_options", 00:20:39.101 "params": { 00:20:39.101 "timeout_sec": 30 00:20:39.101 } 00:20:39.101 }, 
00:20:39.101 { 00:20:39.101 "method": "bdev_nvme_set_options", 00:20:39.101 "params": { 00:20:39.101 "action_on_timeout": "none", 00:20:39.101 "timeout_us": 0, 00:20:39.101 "timeout_admin_us": 0, 00:20:39.101 "keep_alive_timeout_ms": 10000, 00:20:39.101 "arbitration_burst": 0, 00:20:39.101 "low_priority_weight": 0, 00:20:39.101 "medium_priority_weight": 0, 00:20:39.101 "high_priority_weight": 0, 00:20:39.101 "nvme_adminq_poll_period_us": 10000, 00:20:39.101 "nvme_ioq_poll_period_us": 0, 00:20:39.101 "io_queue_requests": 512, 00:20:39.101 "delay_cmd_submit": true, 00:20:39.101 "transport_retry_count": 4, 00:20:39.101 "bdev_retry_count": 3, 00:20:39.101 "transport_ack_timeout": 0, 00:20:39.101 "ctrlr_loss_timeout_sec": 0, 00:20:39.101 "reconnect_delay_sec": 0, 00:20:39.101 "fast_io_fail_timeout_sec": 0, 00:20:39.101 "disable_auto_failback": false, 00:20:39.101 "generate_uuids": false, 00:20:39.101 "transport_tos": 0, 00:20:39.101 "nvme_error_stat": false, 00:20:39.101 "rdma_srq_size": 0, 00:20:39.101 "io_path_stat": false, 00:20:39.101 "allow_accel_sequence": false, 00:20:39.101 "rdma_max_cq_size": 0, 00:20:39.101 "rdma_cm_event_timeout_ms": 0, 00:20:39.101 "dhchap_digests": [ 00:20:39.101 "sha256", 00:20:39.101 "sha384", 00:20:39.101 "sha512" 00:20:39.101 ], 00:20:39.101 "dhchap_dhgroups": [ 00:20:39.101 "null", 00:20:39.101 "ffdhe2048", 00:20:39.101 "ffdhe3072", 00:20:39.101 "ffdhe4096", 00:20:39.101 "ffdhe6144", 00:20:39.101 "ffdhe8192" 00:20:39.101 ] 00:20:39.101 } 00:20:39.101 }, 00:20:39.101 { 00:20:39.101 "method": "bdev_nvme_attach_controller", 00:20:39.101 "params": { 00:20:39.101 "name": "TLSTEST", 00:20:39.101 "trtype": "TCP", 00:20:39.101 "adrfam": "IPv4", 00:20:39.101 "traddr": "10.0.0.2", 00:20:39.101 "trsvcid": "4420", 00:20:39.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.101 "prchk_reftag": false, 00:20:39.101 "prchk_guard": false, 00:20:39.101 "ctrlr_loss_timeout_sec": 0, 00:20:39.101 "reconnect_delay_sec": 0, 00:20:39.102 
"fast_io_fail_timeout_sec": 0, 00:20:39.102 "psk": "key0", 00:20:39.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.102 "hdgst": false, 00:20:39.102 "ddgst": false, 00:20:39.102 "multipath": "multipath" 00:20:39.102 } 00:20:39.102 }, 00:20:39.102 { 00:20:39.102 "method": "bdev_nvme_set_hotplug", 00:20:39.102 "params": { 00:20:39.102 "period_us": 100000, 00:20:39.102 "enable": false 00:20:39.102 } 00:20:39.102 }, 00:20:39.102 { 00:20:39.102 "method": "bdev_wait_for_examine" 00:20:39.102 } 00:20:39.102 ] 00:20:39.102 }, 00:20:39.102 { 00:20:39.102 "subsystem": "nbd", 00:20:39.102 "config": [] 00:20:39.102 } 00:20:39.102 ] 00:20:39.102 }' 00:20:39.102 [2024-11-19 18:19:40.370202] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:20:39.102 [2024-11-19 18:19:40.370256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2009554 ] 00:20:39.102 [2024-11-19 18:19:40.456156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.102 [2024-11-19 18:19:40.485002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.361 [2024-11-19 18:19:40.619083] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:39.932 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.932 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:39.932 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:39.932 Running I/O for 10 seconds... 
00:20:41.815 5240.00 IOPS, 20.47 MiB/s [2024-11-19T17:19:44.668Z] 4877.00 IOPS, 19.05 MiB/s [2024-11-19T17:19:45.609Z] 5413.00 IOPS, 21.14 MiB/s [2024-11-19T17:19:46.549Z] 5256.75 IOPS, 20.53 MiB/s [2024-11-19T17:19:47.488Z] 5292.20 IOPS, 20.67 MiB/s [2024-11-19T17:19:48.428Z] 5365.00 IOPS, 20.96 MiB/s [2024-11-19T17:19:49.365Z] 5497.29 IOPS, 21.47 MiB/s [2024-11-19T17:19:50.304Z] 5447.00 IOPS, 21.28 MiB/s [2024-11-19T17:19:51.688Z] 5444.33 IOPS, 21.27 MiB/s [2024-11-19T17:19:51.688Z] 5492.20 IOPS, 21.45 MiB/s 00:20:50.217 Latency(us) 00:20:50.217 [2024-11-19T17:19:51.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.217 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:50.217 Verification LBA range: start 0x0 length 0x2000 00:20:50.217 TLSTESTn1 : 10.02 5496.17 21.47 0.00 0.00 23255.91 4696.75 31238.83 00:20:50.217 [2024-11-19T17:19:51.688Z] =================================================================================================================== 00:20:50.217 [2024-11-19T17:19:51.688Z] Total : 5496.17 21.47 0.00 0.00 23255.91 4696.75 31238.83 00:20:50.217 { 00:20:50.217 "results": [ 00:20:50.217 { 00:20:50.217 "job": "TLSTESTn1", 00:20:50.217 "core_mask": "0x4", 00:20:50.217 "workload": "verify", 00:20:50.217 "status": "finished", 00:20:50.217 "verify_range": { 00:20:50.217 "start": 0, 00:20:50.217 "length": 8192 00:20:50.217 }, 00:20:50.217 "queue_depth": 128, 00:20:50.217 "io_size": 4096, 00:20:50.217 "runtime": 10.016062, 00:20:50.217 "iops": 5496.17204845577, 00:20:50.217 "mibps": 21.46942206428035, 00:20:50.217 "io_failed": 0, 00:20:50.217 "io_timeout": 0, 00:20:50.217 "avg_latency_us": 23255.90818334847, 00:20:50.217 "min_latency_us": 4696.746666666667, 00:20:50.217 "max_latency_us": 31238.826666666668 00:20:50.217 } 00:20:50.217 ], 00:20:50.217 "core_count": 1 00:20:50.217 } 00:20:50.217 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:20:50.217 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2009554 00:20:50.217 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2009554 ']' 00:20:50.217 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2009554 00:20:50.217 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:50.217 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.217 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2009554 00:20:50.217 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:50.217 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:50.217 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2009554' 00:20:50.217 killing process with pid 2009554 00:20:50.217 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2009554 00:20:50.217 Received shutdown signal, test time was about 10.000000 seconds 00:20:50.217 00:20:50.217 Latency(us) 00:20:50.217 [2024-11-19T17:19:51.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.217 [2024-11-19T17:19:51.688Z] =================================================================================================================== 00:20:50.217 [2024-11-19T17:19:51.688Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.217 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2009554 00:20:50.217 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2009401 00:20:50.217 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2009401 ']' 00:20:50.217 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2009401 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2009401 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2009401' 00:20:50.218 killing process with pid 2009401 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2009401 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2009401 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2011806 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2011806 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:50.218 
18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2011806 ']' 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.218 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.478 [2024-11-19 18:19:51.722836] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:20:50.478 [2024-11-19 18:19:51.722888] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.478 [2024-11-19 18:19:51.818295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.478 [2024-11-19 18:19:51.853405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.478 [2024-11-19 18:19:51.853441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.478 [2024-11-19 18:19:51.853449] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.478 [2024-11-19 18:19:51.853456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
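The 10-second run summary earlier in this section reports 5496.17 IOPS and 21.47 MiB/s for 4096-byte I/Os (`-o 4096` on the bdevperf command line); the two columns are mutually consistent, since MiB/s = IOPS × io_size / 2^20:

```shell
# Cross-check of the bdevperf summary columns: 5496.17 IOPS at 4096 B per I/O
# gives 5496.17 * 4096 / 1048576 MiB/s.
awk 'BEGIN { printf "%.2f\n", 5496.17 * 4096 / 1048576 }'   # prints 21.47
```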
00:20:50.478 [2024-11-19 18:19:51.853462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:50.478 [2024-11-19 18:19:51.854055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.419 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.419 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:51.419 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:51.419 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.419 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.419 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.419 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.UPgGCcDP1P 00:20:51.419 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UPgGCcDP1P 00:20:51.419 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:51.419 [2024-11-19 18:19:52.748304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.419 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:51.679 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:51.940 [2024-11-19 18:19:53.149305] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:20:51.940 [2024-11-19 18:19:53.149665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.940 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:51.940 malloc0 00:20:51.940 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:52.201 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UPgGCcDP1P 00:20:52.461 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:52.721 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2012263 00:20:52.721 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:52.721 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:52.721 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2012263 /var/tmp/bdevperf.sock 00:20:52.721 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2012263 ']' 00:20:52.721 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:52.721 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.721 
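The target-side setup sequence above (`nvmf_create_transport -t tcp -o`, `nvmf_create_subsystem`, `nvmf_subsystem_add_listener ... -k`, `bdev_malloc_create`, `nvmf_subsystem_add_ns`, `keyring_file_add_key`, `nvmf_subsystem_add_host --psk key0`) could also be captured as a static JSON config. The fragment below is a sketch only: the method names come from the rpc.py calls in this log, but parameter spellings such as `secure_channel` (assumed to correspond to the `-k` flag) are not taken from this output:

```json
{
  "subsystems": [
    { "subsystem": "keyring", "config": [
      { "method": "keyring_file_add_key",
        "params": { "name": "key0", "path": "/tmp/tmp.UPgGCcDP1P" } } ] },
    { "subsystem": "nvmf", "config": [
      { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
      { "method": "nvmf_create_subsystem",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "serial_number": "SPDK00000000000001",
                    "max_namespaces": 10 } },
      { "method": "nvmf_subsystem_add_listener",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                        "traddr": "10.0.0.2",
                                        "trsvcid": "4420" },
                    "secure_channel": true } },
      { "method": "nvmf_subsystem_add_host",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "host": "nqn.2016-06.io.spdk:host1",
                    "psk": "key0" } } ] }
  ]
}
```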
18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:52.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:52.721 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.721 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.721 [2024-11-19 18:19:54.034310] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:20:52.721 [2024-11-19 18:19:54.034382] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012263 ] 00:20:52.721 [2024-11-19 18:19:54.121111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.721 [2024-11-19 18:19:54.154794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.660 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.660 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:53.660 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UPgGCcDP1P 00:20:53.660 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:53.920 [2024-11-19 18:19:55.180947] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:20:53.920 nvme0n1 00:20:53.920 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:53.920 Running I/O for 1 seconds... 00:20:55.302 5526.00 IOPS, 21.59 MiB/s 00:20:55.302 Latency(us) 00:20:55.302 [2024-11-19T17:19:56.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.302 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:55.302 Verification LBA range: start 0x0 length 0x2000 00:20:55.302 nvme0n1 : 1.05 5382.48 21.03 0.00 0.00 23249.73 8628.91 46530.56 00:20:55.302 [2024-11-19T17:19:56.773Z] =================================================================================================================== 00:20:55.302 [2024-11-19T17:19:56.773Z] Total : 5382.48 21.03 0.00 0.00 23249.73 8628.91 46530.56 00:20:55.302 { 00:20:55.302 "results": [ 00:20:55.302 { 00:20:55.302 "job": "nvme0n1", 00:20:55.302 "core_mask": "0x2", 00:20:55.302 "workload": "verify", 00:20:55.302 "status": "finished", 00:20:55.302 "verify_range": { 00:20:55.302 "start": 0, 00:20:55.302 "length": 8192 00:20:55.302 }, 00:20:55.302 "queue_depth": 128, 00:20:55.302 "io_size": 4096, 00:20:55.302 "runtime": 1.05063, 00:20:55.302 "iops": 5382.484794837384, 00:20:55.302 "mibps": 21.02533122983353, 00:20:55.302 "io_failed": 0, 00:20:55.302 "io_timeout": 0, 00:20:55.302 "avg_latency_us": 23249.726547597995, 00:20:55.302 "min_latency_us": 8628.906666666666, 00:20:55.302 "max_latency_us": 46530.56 00:20:55.302 } 00:20:55.302 ], 00:20:55.302 "core_count": 1 00:20:55.302 } 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2012263 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2012263 ']' 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2012263 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2012263 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2012263' 00:20:55.302 killing process with pid 2012263 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2012263 00:20:55.302 Received shutdown signal, test time was about 1.000000 seconds 00:20:55.302 00:20:55.302 Latency(us) 00:20:55.302 [2024-11-19T17:19:56.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.302 [2024-11-19T17:19:56.773Z] =================================================================================================================== 00:20:55.302 [2024-11-19T17:19:56.773Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2012263 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2011806 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2011806 ']' 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2011806 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2011806 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2011806' 00:20:55.302 killing process with pid 2011806 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2011806 00:20:55.302 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2011806 00:20:55.563 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:55.563 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:55.563 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.563 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.563 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2012805 00:20:55.563 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2012805 00:20:55.564 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:55.564 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2012805 ']' 00:20:55.564 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.564 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:20:55.564 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.564 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.564 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.564 [2024-11-19 18:19:56.865345] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:20:55.564 [2024-11-19 18:19:56.865404] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.564 [2024-11-19 18:19:56.961886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.564 [2024-11-19 18:19:57.011905] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.564 [2024-11-19 18:19:57.011964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.564 [2024-11-19 18:19:57.011973] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.564 [2024-11-19 18:19:57.011980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.564 [2024-11-19 18:19:57.011987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:55.564 [2024-11-19 18:19:57.012747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.518 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.518 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:56.518 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:56.518 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.518 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.518 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.518 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:56.518 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.518 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.518 [2024-11-19 18:19:57.743404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.518 malloc0 00:20:56.518 [2024-11-19 18:19:57.773501] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:56.518 [2024-11-19 18:19:57.773832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.518 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.518 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2012971 00:20:56.519 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2012971 /var/tmp/bdevperf.sock 00:20:56.519 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:56.519 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2012971 ']' 00:20:56.519 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.519 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.519 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.519 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.519 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.519 [2024-11-19 18:19:57.856825] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:20:56.519 [2024-11-19 18:19:57.856893] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012971 ] 00:20:56.519 [2024-11-19 18:19:57.942922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.519 [2024-11-19 18:19:57.977090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.458 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.458 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:57.458 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UPgGCcDP1P 00:20:57.458 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:57.719 [2024-11-19 18:19:58.946991] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:57.719 nvme0n1 00:20:57.719 18:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:57.719 Running I/O for 1 seconds... 
00:20:58.929 6400.00 IOPS, 25.00 MiB/s 00:20:58.929 Latency(us) 00:20:58.929 [2024-11-19T17:20:00.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.929 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:58.929 Verification LBA range: start 0x0 length 0x2000 00:20:58.929 nvme0n1 : 1.01 6452.62 25.21 0.00 0.00 19715.94 4587.52 20971.52 00:20:58.929 [2024-11-19T17:20:00.400Z] =================================================================================================================== 00:20:58.929 [2024-11-19T17:20:00.400Z] Total : 6452.62 25.21 0.00 0.00 19715.94 4587.52 20971.52 00:20:58.929 { 00:20:58.929 "results": [ 00:20:58.929 { 00:20:58.929 "job": "nvme0n1", 00:20:58.929 "core_mask": "0x2", 00:20:58.929 "workload": "verify", 00:20:58.929 "status": "finished", 00:20:58.929 "verify_range": { 00:20:58.929 "start": 0, 00:20:58.929 "length": 8192 00:20:58.929 }, 00:20:58.929 "queue_depth": 128, 00:20:58.929 "io_size": 4096, 00:20:58.929 "runtime": 1.011682, 00:20:58.929 "iops": 6452.620487465429, 00:20:58.929 "mibps": 25.20554877916183, 00:20:58.929 "io_failed": 0, 00:20:58.929 "io_timeout": 0, 00:20:58.929 "avg_latency_us": 19715.944575163398, 00:20:58.929 "min_latency_us": 4587.52, 00:20:58.929 "max_latency_us": 20971.52 00:20:58.929 } 00:20:58.929 ], 00:20:58.929 "core_count": 1 00:20:58.929 } 00:20:58.929 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:58.929 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.929 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.929 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.929 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:58.929 "subsystems": [ 00:20:58.929 { 00:20:58.929 "subsystem": "keyring", 
00:20:58.929 "config": [ 00:20:58.929 { 00:20:58.929 "method": "keyring_file_add_key", 00:20:58.929 "params": { 00:20:58.929 "name": "key0", 00:20:58.929 "path": "/tmp/tmp.UPgGCcDP1P" 00:20:58.929 } 00:20:58.929 } 00:20:58.929 ] 00:20:58.929 }, 00:20:58.929 { 00:20:58.929 "subsystem": "iobuf", 00:20:58.929 "config": [ 00:20:58.929 { 00:20:58.929 "method": "iobuf_set_options", 00:20:58.929 "params": { 00:20:58.929 "small_pool_count": 8192, 00:20:58.929 "large_pool_count": 1024, 00:20:58.929 "small_bufsize": 8192, 00:20:58.929 "large_bufsize": 135168, 00:20:58.929 "enable_numa": false 00:20:58.929 } 00:20:58.929 } 00:20:58.929 ] 00:20:58.929 }, 00:20:58.929 { 00:20:58.929 "subsystem": "sock", 00:20:58.929 "config": [ 00:20:58.929 { 00:20:58.929 "method": "sock_set_default_impl", 00:20:58.929 "params": { 00:20:58.929 "impl_name": "posix" 00:20:58.929 } 00:20:58.929 }, 00:20:58.929 { 00:20:58.929 "method": "sock_impl_set_options", 00:20:58.929 "params": { 00:20:58.929 "impl_name": "ssl", 00:20:58.929 "recv_buf_size": 4096, 00:20:58.929 "send_buf_size": 4096, 00:20:58.929 "enable_recv_pipe": true, 00:20:58.929 "enable_quickack": false, 00:20:58.929 "enable_placement_id": 0, 00:20:58.929 "enable_zerocopy_send_server": true, 00:20:58.929 "enable_zerocopy_send_client": false, 00:20:58.929 "zerocopy_threshold": 0, 00:20:58.929 "tls_version": 0, 00:20:58.929 "enable_ktls": false 00:20:58.929 } 00:20:58.929 }, 00:20:58.929 { 00:20:58.929 "method": "sock_impl_set_options", 00:20:58.929 "params": { 00:20:58.929 "impl_name": "posix", 00:20:58.929 "recv_buf_size": 2097152, 00:20:58.929 "send_buf_size": 2097152, 00:20:58.929 "enable_recv_pipe": true, 00:20:58.929 "enable_quickack": false, 00:20:58.929 "enable_placement_id": 0, 00:20:58.929 "enable_zerocopy_send_server": true, 00:20:58.929 "enable_zerocopy_send_client": false, 00:20:58.929 "zerocopy_threshold": 0, 00:20:58.929 "tls_version": 0, 00:20:58.929 "enable_ktls": false 00:20:58.929 } 00:20:58.929 } 00:20:58.929 ] 
00:20:58.929 }, 00:20:58.929 { 00:20:58.929 "subsystem": "vmd", 00:20:58.929 "config": [] 00:20:58.929 }, 00:20:58.929 { 00:20:58.929 "subsystem": "accel", 00:20:58.929 "config": [ 00:20:58.929 { 00:20:58.929 "method": "accel_set_options", 00:20:58.929 "params": { 00:20:58.929 "small_cache_size": 128, 00:20:58.929 "large_cache_size": 16, 00:20:58.929 "task_count": 2048, 00:20:58.929 "sequence_count": 2048, 00:20:58.929 "buf_count": 2048 00:20:58.929 } 00:20:58.929 } 00:20:58.929 ] 00:20:58.929 }, 00:20:58.929 { 00:20:58.929 "subsystem": "bdev", 00:20:58.929 "config": [ 00:20:58.929 { 00:20:58.929 "method": "bdev_set_options", 00:20:58.929 "params": { 00:20:58.929 "bdev_io_pool_size": 65535, 00:20:58.929 "bdev_io_cache_size": 256, 00:20:58.929 "bdev_auto_examine": true, 00:20:58.929 "iobuf_small_cache_size": 128, 00:20:58.929 "iobuf_large_cache_size": 16 00:20:58.929 } 00:20:58.929 }, 00:20:58.929 { 00:20:58.929 "method": "bdev_raid_set_options", 00:20:58.929 "params": { 00:20:58.929 "process_window_size_kb": 1024, 00:20:58.929 "process_max_bandwidth_mb_sec": 0 00:20:58.929 } 00:20:58.929 }, 00:20:58.929 { 00:20:58.929 "method": "bdev_iscsi_set_options", 00:20:58.929 "params": { 00:20:58.929 "timeout_sec": 30 00:20:58.929 } 00:20:58.929 }, 00:20:58.929 { 00:20:58.929 "method": "bdev_nvme_set_options", 00:20:58.929 "params": { 00:20:58.929 "action_on_timeout": "none", 00:20:58.929 "timeout_us": 0, 00:20:58.929 "timeout_admin_us": 0, 00:20:58.929 "keep_alive_timeout_ms": 10000, 00:20:58.929 "arbitration_burst": 0, 00:20:58.929 "low_priority_weight": 0, 00:20:58.929 "medium_priority_weight": 0, 00:20:58.929 "high_priority_weight": 0, 00:20:58.930 "nvme_adminq_poll_period_us": 10000, 00:20:58.930 "nvme_ioq_poll_period_us": 0, 00:20:58.930 "io_queue_requests": 0, 00:20:58.930 "delay_cmd_submit": true, 00:20:58.930 "transport_retry_count": 4, 00:20:58.930 "bdev_retry_count": 3, 00:20:58.930 "transport_ack_timeout": 0, 00:20:58.930 "ctrlr_loss_timeout_sec": 0, 00:20:58.930 
"reconnect_delay_sec": 0, 00:20:58.930 "fast_io_fail_timeout_sec": 0, 00:20:58.930 "disable_auto_failback": false, 00:20:58.930 "generate_uuids": false, 00:20:58.930 "transport_tos": 0, 00:20:58.930 "nvme_error_stat": false, 00:20:58.930 "rdma_srq_size": 0, 00:20:58.930 "io_path_stat": false, 00:20:58.930 "allow_accel_sequence": false, 00:20:58.930 "rdma_max_cq_size": 0, 00:20:58.930 "rdma_cm_event_timeout_ms": 0, 00:20:58.930 "dhchap_digests": [ 00:20:58.930 "sha256", 00:20:58.930 "sha384", 00:20:58.930 "sha512" 00:20:58.930 ], 00:20:58.930 "dhchap_dhgroups": [ 00:20:58.930 "null", 00:20:58.930 "ffdhe2048", 00:20:58.930 "ffdhe3072", 00:20:58.930 "ffdhe4096", 00:20:58.930 "ffdhe6144", 00:20:58.930 "ffdhe8192" 00:20:58.930 ] 00:20:58.930 } 00:20:58.930 }, 00:20:58.930 { 00:20:58.930 "method": "bdev_nvme_set_hotplug", 00:20:58.930 "params": { 00:20:58.930 "period_us": 100000, 00:20:58.930 "enable": false 00:20:58.930 } 00:20:58.930 }, 00:20:58.930 { 00:20:58.930 "method": "bdev_malloc_create", 00:20:58.930 "params": { 00:20:58.930 "name": "malloc0", 00:20:58.930 "num_blocks": 8192, 00:20:58.930 "block_size": 4096, 00:20:58.930 "physical_block_size": 4096, 00:20:58.930 "uuid": "c5f7dcc7-6182-4c11-a726-dd85d11b9b31", 00:20:58.930 "optimal_io_boundary": 0, 00:20:58.930 "md_size": 0, 00:20:58.930 "dif_type": 0, 00:20:58.930 "dif_is_head_of_md": false, 00:20:58.930 "dif_pi_format": 0 00:20:58.930 } 00:20:58.930 }, 00:20:58.930 { 00:20:58.930 "method": "bdev_wait_for_examine" 00:20:58.930 } 00:20:58.930 ] 00:20:58.930 }, 00:20:58.930 { 00:20:58.930 "subsystem": "nbd", 00:20:58.930 "config": [] 00:20:58.930 }, 00:20:58.930 { 00:20:58.930 "subsystem": "scheduler", 00:20:58.930 "config": [ 00:20:58.930 { 00:20:58.930 "method": "framework_set_scheduler", 00:20:58.930 "params": { 00:20:58.930 "name": "static" 00:20:58.930 } 00:20:58.930 } 00:20:58.930 ] 00:20:58.930 }, 00:20:58.930 { 00:20:58.930 "subsystem": "nvmf", 00:20:58.930 "config": [ 00:20:58.930 { 00:20:58.930 
"method": "nvmf_set_config", 00:20:58.930 "params": { 00:20:58.930 "discovery_filter": "match_any", 00:20:58.930 "admin_cmd_passthru": { 00:20:58.930 "identify_ctrlr": false 00:20:58.930 }, 00:20:58.930 "dhchap_digests": [ 00:20:58.930 "sha256", 00:20:58.930 "sha384", 00:20:58.930 "sha512" 00:20:58.930 ], 00:20:58.930 "dhchap_dhgroups": [ 00:20:58.930 "null", 00:20:58.930 "ffdhe2048", 00:20:58.930 "ffdhe3072", 00:20:58.930 "ffdhe4096", 00:20:58.930 "ffdhe6144", 00:20:58.930 "ffdhe8192" 00:20:58.930 ] 00:20:58.930 } 00:20:58.930 }, 00:20:58.930 { 00:20:58.930 "method": "nvmf_set_max_subsystems", 00:20:58.930 "params": { 00:20:58.930 "max_subsystems": 1024 00:20:58.930 } 00:20:58.930 }, 00:20:58.930 { 00:20:58.930 "method": "nvmf_set_crdt", 00:20:58.930 "params": { 00:20:58.930 "crdt1": 0, 00:20:58.930 "crdt2": 0, 00:20:58.930 "crdt3": 0 00:20:58.930 } 00:20:58.930 }, 00:20:58.930 { 00:20:58.930 "method": "nvmf_create_transport", 00:20:58.930 "params": { 00:20:58.930 "trtype": "TCP", 00:20:58.930 "max_queue_depth": 128, 00:20:58.930 "max_io_qpairs_per_ctrlr": 127, 00:20:58.930 "in_capsule_data_size": 4096, 00:20:58.930 "max_io_size": 131072, 00:20:58.930 "io_unit_size": 131072, 00:20:58.930 "max_aq_depth": 128, 00:20:58.930 "num_shared_buffers": 511, 00:20:58.930 "buf_cache_size": 4294967295, 00:20:58.930 "dif_insert_or_strip": false, 00:20:58.930 "zcopy": false, 00:20:58.930 "c2h_success": false, 00:20:58.930 "sock_priority": 0, 00:20:58.930 "abort_timeout_sec": 1, 00:20:58.930 "ack_timeout": 0, 00:20:58.930 "data_wr_pool_size": 0 00:20:58.930 } 00:20:58.930 }, 00:20:58.930 { 00:20:58.930 "method": "nvmf_create_subsystem", 00:20:58.930 "params": { 00:20:58.930 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.930 "allow_any_host": false, 00:20:58.930 "serial_number": "00000000000000000000", 00:20:58.930 "model_number": "SPDK bdev Controller", 00:20:58.930 "max_namespaces": 32, 00:20:58.930 "min_cntlid": 1, 00:20:58.930 "max_cntlid": 65519, 00:20:58.930 "ana_reporting": 
false 00:20:58.930 } 00:20:58.930 }, 00:20:58.930 { 00:20:58.930 "method": "nvmf_subsystem_add_host", 00:20:58.930 "params": { 00:20:58.930 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.930 "host": "nqn.2016-06.io.spdk:host1", 00:20:58.930 "psk": "key0" 00:20:58.930 } 00:20:58.930 }, 00:20:58.930 { 00:20:58.930 "method": "nvmf_subsystem_add_ns", 00:20:58.930 "params": { 00:20:58.930 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.930 "namespace": { 00:20:58.930 "nsid": 1, 00:20:58.930 "bdev_name": "malloc0", 00:20:58.930 "nguid": "C5F7DCC761824C11A726DD85D11B9B31", 00:20:58.930 "uuid": "c5f7dcc7-6182-4c11-a726-dd85d11b9b31", 00:20:58.930 "no_auto_visible": false 00:20:58.930 } 00:20:58.930 } 00:20:58.930 }, 00:20:58.930 { 00:20:58.930 "method": "nvmf_subsystem_add_listener", 00:20:58.930 "params": { 00:20:58.930 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.930 "listen_address": { 00:20:58.930 "trtype": "TCP", 00:20:58.930 "adrfam": "IPv4", 00:20:58.930 "traddr": "10.0.0.2", 00:20:58.930 "trsvcid": "4420" 00:20:58.930 }, 00:20:58.930 "secure_channel": false, 00:20:58.930 "sock_impl": "ssl" 00:20:58.930 } 00:20:58.930 } 00:20:58.930 ] 00:20:58.930 } 00:20:58.930 ] 00:20:58.930 }' 00:20:58.930 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:59.190 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:59.190 "subsystems": [ 00:20:59.190 { 00:20:59.190 "subsystem": "keyring", 00:20:59.190 "config": [ 00:20:59.190 { 00:20:59.190 "method": "keyring_file_add_key", 00:20:59.190 "params": { 00:20:59.190 "name": "key0", 00:20:59.190 "path": "/tmp/tmp.UPgGCcDP1P" 00:20:59.190 } 00:20:59.190 } 00:20:59.190 ] 00:20:59.190 }, 00:20:59.190 { 00:20:59.190 "subsystem": "iobuf", 00:20:59.190 "config": [ 00:20:59.190 { 00:20:59.190 "method": "iobuf_set_options", 00:20:59.190 "params": { 00:20:59.190 "small_pool_count": 
8192, 00:20:59.190 "large_pool_count": 1024, 00:20:59.190 "small_bufsize": 8192, 00:20:59.190 "large_bufsize": 135168, 00:20:59.190 "enable_numa": false 00:20:59.190 } 00:20:59.190 } 00:20:59.190 ] 00:20:59.190 }, 00:20:59.190 { 00:20:59.190 "subsystem": "sock", 00:20:59.190 "config": [ 00:20:59.190 { 00:20:59.190 "method": "sock_set_default_impl", 00:20:59.190 "params": { 00:20:59.190 "impl_name": "posix" 00:20:59.190 } 00:20:59.190 }, 00:20:59.190 { 00:20:59.190 "method": "sock_impl_set_options", 00:20:59.190 "params": { 00:20:59.190 "impl_name": "ssl", 00:20:59.190 "recv_buf_size": 4096, 00:20:59.190 "send_buf_size": 4096, 00:20:59.190 "enable_recv_pipe": true, 00:20:59.190 "enable_quickack": false, 00:20:59.190 "enable_placement_id": 0, 00:20:59.190 "enable_zerocopy_send_server": true, 00:20:59.190 "enable_zerocopy_send_client": false, 00:20:59.190 "zerocopy_threshold": 0, 00:20:59.190 "tls_version": 0, 00:20:59.190 "enable_ktls": false 00:20:59.190 } 00:20:59.190 }, 00:20:59.190 { 00:20:59.190 "method": "sock_impl_set_options", 00:20:59.190 "params": { 00:20:59.190 "impl_name": "posix", 00:20:59.190 "recv_buf_size": 2097152, 00:20:59.190 "send_buf_size": 2097152, 00:20:59.190 "enable_recv_pipe": true, 00:20:59.190 "enable_quickack": false, 00:20:59.190 "enable_placement_id": 0, 00:20:59.190 "enable_zerocopy_send_server": true, 00:20:59.190 "enable_zerocopy_send_client": false, 00:20:59.190 "zerocopy_threshold": 0, 00:20:59.190 "tls_version": 0, 00:20:59.190 "enable_ktls": false 00:20:59.190 } 00:20:59.190 } 00:20:59.190 ] 00:20:59.190 }, 00:20:59.190 { 00:20:59.190 "subsystem": "vmd", 00:20:59.190 "config": [] 00:20:59.190 }, 00:20:59.190 { 00:20:59.190 "subsystem": "accel", 00:20:59.190 "config": [ 00:20:59.190 { 00:20:59.190 "method": "accel_set_options", 00:20:59.190 "params": { 00:20:59.191 "small_cache_size": 128, 00:20:59.191 "large_cache_size": 16, 00:20:59.191 "task_count": 2048, 00:20:59.191 "sequence_count": 2048, 00:20:59.191 "buf_count": 2048 
00:20:59.191 } 00:20:59.191 } 00:20:59.191 ] 00:20:59.191 }, 00:20:59.191 { 00:20:59.191 "subsystem": "bdev", 00:20:59.191 "config": [ 00:20:59.191 { 00:20:59.191 "method": "bdev_set_options", 00:20:59.191 "params": { 00:20:59.191 "bdev_io_pool_size": 65535, 00:20:59.191 "bdev_io_cache_size": 256, 00:20:59.191 "bdev_auto_examine": true, 00:20:59.191 "iobuf_small_cache_size": 128, 00:20:59.191 "iobuf_large_cache_size": 16 00:20:59.191 } 00:20:59.191 }, 00:20:59.191 { 00:20:59.191 "method": "bdev_raid_set_options", 00:20:59.191 "params": { 00:20:59.191 "process_window_size_kb": 1024, 00:20:59.191 "process_max_bandwidth_mb_sec": 0 00:20:59.191 } 00:20:59.191 }, 00:20:59.191 { 00:20:59.191 "method": "bdev_iscsi_set_options", 00:20:59.191 "params": { 00:20:59.191 "timeout_sec": 30 00:20:59.191 } 00:20:59.191 }, 00:20:59.191 { 00:20:59.191 "method": "bdev_nvme_set_options", 00:20:59.191 "params": { 00:20:59.191 "action_on_timeout": "none", 00:20:59.191 "timeout_us": 0, 00:20:59.191 "timeout_admin_us": 0, 00:20:59.191 "keep_alive_timeout_ms": 10000, 00:20:59.191 "arbitration_burst": 0, 00:20:59.191 "low_priority_weight": 0, 00:20:59.191 "medium_priority_weight": 0, 00:20:59.191 "high_priority_weight": 0, 00:20:59.191 "nvme_adminq_poll_period_us": 10000, 00:20:59.191 "nvme_ioq_poll_period_us": 0, 00:20:59.191 "io_queue_requests": 512, 00:20:59.191 "delay_cmd_submit": true, 00:20:59.191 "transport_retry_count": 4, 00:20:59.191 "bdev_retry_count": 3, 00:20:59.191 "transport_ack_timeout": 0, 00:20:59.191 "ctrlr_loss_timeout_sec": 0, 00:20:59.191 "reconnect_delay_sec": 0, 00:20:59.191 "fast_io_fail_timeout_sec": 0, 00:20:59.191 "disable_auto_failback": false, 00:20:59.191 "generate_uuids": false, 00:20:59.191 "transport_tos": 0, 00:20:59.191 "nvme_error_stat": false, 00:20:59.191 "rdma_srq_size": 0, 00:20:59.191 "io_path_stat": false, 00:20:59.191 "allow_accel_sequence": false, 00:20:59.191 "rdma_max_cq_size": 0, 00:20:59.191 "rdma_cm_event_timeout_ms": 0, 00:20:59.191 
"dhchap_digests": [ 00:20:59.191 "sha256", 00:20:59.191 "sha384", 00:20:59.191 "sha512" 00:20:59.191 ], 00:20:59.191 "dhchap_dhgroups": [ 00:20:59.191 "null", 00:20:59.191 "ffdhe2048", 00:20:59.191 "ffdhe3072", 00:20:59.191 "ffdhe4096", 00:20:59.191 "ffdhe6144", 00:20:59.191 "ffdhe8192" 00:20:59.191 ] 00:20:59.191 } 00:20:59.191 }, 00:20:59.191 { 00:20:59.191 "method": "bdev_nvme_attach_controller", 00:20:59.191 "params": { 00:20:59.191 "name": "nvme0", 00:20:59.191 "trtype": "TCP", 00:20:59.191 "adrfam": "IPv4", 00:20:59.191 "traddr": "10.0.0.2", 00:20:59.191 "trsvcid": "4420", 00:20:59.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.191 "prchk_reftag": false, 00:20:59.191 "prchk_guard": false, 00:20:59.191 "ctrlr_loss_timeout_sec": 0, 00:20:59.191 "reconnect_delay_sec": 0, 00:20:59.191 "fast_io_fail_timeout_sec": 0, 00:20:59.191 "psk": "key0", 00:20:59.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.191 "hdgst": false, 00:20:59.191 "ddgst": false, 00:20:59.191 "multipath": "multipath" 00:20:59.191 } 00:20:59.191 }, 00:20:59.191 { 00:20:59.191 "method": "bdev_nvme_set_hotplug", 00:20:59.191 "params": { 00:20:59.191 "period_us": 100000, 00:20:59.191 "enable": false 00:20:59.191 } 00:20:59.191 }, 00:20:59.191 { 00:20:59.191 "method": "bdev_enable_histogram", 00:20:59.191 "params": { 00:20:59.191 "name": "nvme0n1", 00:20:59.191 "enable": true 00:20:59.191 } 00:20:59.191 }, 00:20:59.191 { 00:20:59.191 "method": "bdev_wait_for_examine" 00:20:59.191 } 00:20:59.191 ] 00:20:59.191 }, 00:20:59.191 { 00:20:59.191 "subsystem": "nbd", 00:20:59.191 "config": [] 00:20:59.191 } 00:20:59.191 ] 00:20:59.191 }' 00:20:59.191 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2012971 00:20:59.191 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2012971 ']' 00:20:59.191 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2012971 00:20:59.191 18:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:59.191 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.191 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2012971 00:20:59.191 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:59.191 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:59.191 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2012971' 00:20:59.191 killing process with pid 2012971 00:20:59.191 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2012971 00:20:59.191 Received shutdown signal, test time was about 1.000000 seconds 00:20:59.191 00:20:59.191 Latency(us) 00:20:59.191 [2024-11-19T17:20:00.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.191 [2024-11-19T17:20:00.662Z] =================================================================================================================== 00:20:59.191 [2024-11-19T17:20:00.662Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:59.191 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2012971 00:20:59.451 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2012805 00:20:59.451 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2012805 ']' 00:20:59.451 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2012805 00:20:59.451 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:59.451 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.451 
18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2012805 00:20:59.451 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.451 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:59.451 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2012805' 00:20:59.451 killing process with pid 2012805 00:20:59.451 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2012805 00:20:59.451 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2012805 00:20:59.451 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:59.451 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:59.451 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:59.451 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:59.451 "subsystems": [ 00:20:59.451 { 00:20:59.451 "subsystem": "keyring", 00:20:59.451 "config": [ 00:20:59.451 { 00:20:59.451 "method": "keyring_file_add_key", 00:20:59.451 "params": { 00:20:59.451 "name": "key0", 00:20:59.451 "path": "/tmp/tmp.UPgGCcDP1P" 00:20:59.451 } 00:20:59.451 } 00:20:59.451 ] 00:20:59.451 }, 00:20:59.451 { 00:20:59.451 "subsystem": "iobuf", 00:20:59.451 "config": [ 00:20:59.451 { 00:20:59.451 "method": "iobuf_set_options", 00:20:59.451 "params": { 00:20:59.451 "small_pool_count": 8192, 00:20:59.451 "large_pool_count": 1024, 00:20:59.451 "small_bufsize": 8192, 00:20:59.451 "large_bufsize": 135168, 00:20:59.451 "enable_numa": false 00:20:59.451 } 00:20:59.451 } 00:20:59.451 ] 00:20:59.451 }, 00:20:59.451 { 00:20:59.451 "subsystem": "sock", 00:20:59.451 "config": [ 
00:20:59.451 { 00:20:59.451 "method": "sock_set_default_impl", 00:20:59.451 "params": { 00:20:59.451 "impl_name": "posix" 00:20:59.451 } 00:20:59.451 }, 00:20:59.451 { 00:20:59.451 "method": "sock_impl_set_options", 00:20:59.451 "params": { 00:20:59.451 "impl_name": "ssl", 00:20:59.451 "recv_buf_size": 4096, 00:20:59.451 "send_buf_size": 4096, 00:20:59.451 "enable_recv_pipe": true, 00:20:59.451 "enable_quickack": false, 00:20:59.451 "enable_placement_id": 0, 00:20:59.451 "enable_zerocopy_send_server": true, 00:20:59.451 "enable_zerocopy_send_client": false, 00:20:59.451 "zerocopy_threshold": 0, 00:20:59.451 "tls_version": 0, 00:20:59.451 "enable_ktls": false 00:20:59.451 } 00:20:59.451 }, 00:20:59.451 { 00:20:59.451 "method": "sock_impl_set_options", 00:20:59.451 "params": { 00:20:59.451 "impl_name": "posix", 00:20:59.451 "recv_buf_size": 2097152, 00:20:59.451 "send_buf_size": 2097152, 00:20:59.451 "enable_recv_pipe": true, 00:20:59.451 "enable_quickack": false, 00:20:59.451 "enable_placement_id": 0, 00:20:59.451 "enable_zerocopy_send_server": true, 00:20:59.451 "enable_zerocopy_send_client": false, 00:20:59.451 "zerocopy_threshold": 0, 00:20:59.451 "tls_version": 0, 00:20:59.451 "enable_ktls": false 00:20:59.451 } 00:20:59.451 } 00:20:59.451 ] 00:20:59.451 }, 00:20:59.451 { 00:20:59.451 "subsystem": "vmd", 00:20:59.451 "config": [] 00:20:59.451 }, 00:20:59.451 { 00:20:59.452 "subsystem": "accel", 00:20:59.452 "config": [ 00:20:59.452 { 00:20:59.452 "method": "accel_set_options", 00:20:59.452 "params": { 00:20:59.452 "small_cache_size": 128, 00:20:59.452 "large_cache_size": 16, 00:20:59.452 "task_count": 2048, 00:20:59.452 "sequence_count": 2048, 00:20:59.452 "buf_count": 2048 00:20:59.452 } 00:20:59.452 } 00:20:59.452 ] 00:20:59.452 }, 00:20:59.452 { 00:20:59.452 "subsystem": "bdev", 00:20:59.452 "config": [ 00:20:59.452 { 00:20:59.452 "method": "bdev_set_options", 00:20:59.452 "params": { 00:20:59.452 "bdev_io_pool_size": 65535, 00:20:59.452 "bdev_io_cache_size": 
256, 00:20:59.452 "bdev_auto_examine": true, 00:20:59.452 "iobuf_small_cache_size": 128, 00:20:59.452 "iobuf_large_cache_size": 16 00:20:59.452 } 00:20:59.452 }, 00:20:59.452 { 00:20:59.452 "method": "bdev_raid_set_options", 00:20:59.452 "params": { 00:20:59.452 "process_window_size_kb": 1024, 00:20:59.452 "process_max_bandwidth_mb_sec": 0 00:20:59.452 } 00:20:59.452 }, 00:20:59.452 { 00:20:59.452 "method": "bdev_iscsi_set_options", 00:20:59.452 "params": { 00:20:59.452 "timeout_sec": 30 00:20:59.452 } 00:20:59.452 }, 00:20:59.452 { 00:20:59.452 "method": "bdev_nvme_set_options", 00:20:59.452 "params": { 00:20:59.452 "action_on_timeout": "none", 00:20:59.452 "timeout_us": 0, 00:20:59.452 "timeout_admin_us": 0, 00:20:59.452 "keep_alive_timeout_ms": 10000, 00:20:59.452 "arbitration_burst": 0, 00:20:59.452 "low_priority_weight": 0, 00:20:59.452 "medium_priority_weight": 0, 00:20:59.452 "high_priority_weight": 0, 00:20:59.452 "nvme_adminq_poll_period_us": 10000, 00:20:59.452 "nvme_ioq_poll_period_us": 0, 00:20:59.452 "io_queue_requests": 0, 00:20:59.452 "delay_cmd_submit": true, 00:20:59.452 "transport_retry_count": 4, 00:20:59.452 "bdev_retry_count": 3, 00:20:59.452 "transport_ack_timeout": 0, 00:20:59.452 "ctrlr_loss_timeout_sec": 0, 00:20:59.452 "reconnect_delay_sec": 0, 00:20:59.452 "fast_io_fail_timeout_sec": 0, 00:20:59.452 "disable_auto_failback": false, 00:20:59.452 "generate_uuids": false, 00:20:59.452 "transport_tos": 0, 00:20:59.452 "nvme_error_stat": false, 00:20:59.452 "rdma_srq_size": 0, 00:20:59.452 "io_path_stat": false, 00:20:59.452 "allow_accel_sequence": false, 00:20:59.452 "rdma_max_cq_size": 0, 00:20:59.452 "rdma_cm_event_timeout_ms": 0, 00:20:59.452 "dhchap_digests": [ 00:20:59.452 "sha256", 00:20:59.452 "sha384", 00:20:59.452 "sha512" 00:20:59.452 ], 00:20:59.452 "dhchap_dhgroups": [ 00:20:59.452 "null", 00:20:59.452 "ffdhe2048", 00:20:59.452 "ffdhe3072", 00:20:59.452 "ffdhe4096", 00:20:59.452 "ffdhe6144", 00:20:59.452 "ffdhe8192" 00:20:59.452 ] 
00:20:59.452 } 00:20:59.452 }, 00:20:59.452 { 00:20:59.452 "method": "bdev_nvme_set_hotplug", 00:20:59.452 "params": { 00:20:59.452 "period_us": 100000, 00:20:59.452 "enable": false 00:20:59.452 } 00:20:59.452 }, 00:20:59.452 { 00:20:59.452 "method": "bdev_malloc_create", 00:20:59.452 "params": { 00:20:59.452 "name": "malloc0", 00:20:59.452 "num_blocks": 8192, 00:20:59.452 "block_size": 4096, 00:20:59.452 "physical_block_size": 4096, 00:20:59.452 "uuid": "c5f7dcc7-6182-4c11-a726-dd85d11b9b31", 00:20:59.452 "optimal_io_boundary": 0, 00:20:59.452 "md_size": 0, 00:20:59.452 "dif_type": 0, 00:20:59.452 "dif_is_head_of_md": false, 00:20:59.452 "dif_pi_format": 0 00:20:59.452 } 00:20:59.452 }, 00:20:59.452 { 00:20:59.452 "method": "bdev_wait_for_examine" 00:20:59.452 } 00:20:59.452 ] 00:20:59.452 }, 00:20:59.452 { 00:20:59.452 "subsystem": "nbd", 00:20:59.452 "config": [] 00:20:59.452 }, 00:20:59.452 { 00:20:59.452 "subsystem": "scheduler", 00:20:59.452 "config": [ 00:20:59.452 { 00:20:59.452 "method": "framework_set_scheduler", 00:20:59.452 "params": { 00:20:59.452 "name": "static" 00:20:59.452 } 00:20:59.452 } 00:20:59.452 ] 00:20:59.452 }, 00:20:59.452 { 00:20:59.452 "subsystem": "nvmf", 00:20:59.452 "config": [ 00:20:59.452 { 00:20:59.452 "method": "nvmf_set_config", 00:20:59.452 "params": { 00:20:59.452 "discovery_filter": "match_any", 00:20:59.452 "admin_cmd_passthru": { 00:20:59.452 "identify_ctrlr": false 00:20:59.452 }, 00:20:59.452 "dhchap_digests": [ 00:20:59.452 "sha256", 00:20:59.452 "sha384", 00:20:59.452 "sha512" 00:20:59.452 ], 00:20:59.452 "dhchap_dhgroups": [ 00:20:59.452 "null", 00:20:59.452 "ffdhe2048", 00:20:59.452 "ffdhe3072", 00:20:59.452 "ffdhe4096", 00:20:59.452 "ffdhe6144", 00:20:59.452 "ffdhe8192" 00:20:59.452 ] 00:20:59.452 } 00:20:59.452 }, 00:20:59.452 { 00:20:59.452 "method": "nvmf_set_max_subsystems", 00:20:59.452 "params": { 00:20:59.452 "max_subsystems": 1024 00:20:59.452 } 00:20:59.452 }, 00:20:59.452 { 00:20:59.452 "method": 
"nvmf_set_crdt", 00:20:59.452 "params": { 00:20:59.452 "crdt1": 0, 00:20:59.452 "crdt2": 0, 00:20:59.452 "crdt3": 0 00:20:59.452 } 00:20:59.452 }, 00:20:59.452 { 00:20:59.452 "method": "nvmf_create_transport", 00:20:59.452 "params": { 00:20:59.452 "trtype": "TCP", 00:20:59.452 "max_queue_depth": 128, 00:20:59.452 "max_io_qpairs_per_ctrlr": 127, 00:20:59.452 "in_capsule_data_size": 4096, 00:20:59.452 "max_io_size": 131072, 00:20:59.452 "io_unit_size": 131072, 00:20:59.452 "max_aq_depth": 128, 00:20:59.452 "num_shared_buffers": 511, 00:20:59.452 "buf_cache_size": 4294967295, 00:20:59.452 "dif_insert_or_strip": false, 00:20:59.452 "zcopy": false, 00:20:59.452 "c2h_success": false, 00:20:59.452 "sock_priority": 0, 00:20:59.452 "abort_timeout_sec": 1, 00:20:59.452 "ack_timeout": 0, 00:20:59.452 "data_wr_pool_size": 0 00:20:59.452 } 00:20:59.452 }, 00:20:59.452 { 00:20:59.452 "method": "nvmf_create_subsystem", 00:20:59.452 "params": { 00:20:59.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.452 "allow_any_host": false, 00:20:59.452 "serial_number": "00000000000000000000", 00:20:59.452 "model_number": "SPDK bdev Controller", 00:20:59.452 "max_namespaces": 32, 00:20:59.452 "min_cntlid": 1, 00:20:59.452 "max_cntlid": 65519, 00:20:59.452 "ana_reporting": false 00:20:59.452 } 00:20:59.452 }, 00:20:59.452 { 00:20:59.452 "method": "nvmf_subsystem_add_host", 00:20:59.452 "params": { 00:20:59.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.452 "host": "nqn.2016-06.io.spdk:host1", 00:20:59.452 "psk": "key0" 00:20:59.452 } 00:20:59.452 }, 00:20:59.452 { 00:20:59.452 "method": "nvmf_subsystem_add_ns", 00:20:59.452 "params": { 00:20:59.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.452 "namespace": { 00:20:59.452 "nsid": 1, 00:20:59.452 "bdev_name": "malloc0", 00:20:59.452 "nguid": "C5F7DCC761824C11A726DD85D11B9B31", 00:20:59.452 "uuid": "c5f7dcc7-6182-4c11-a726-dd85d11b9b31", 00:20:59.452 "no_auto_visible": false 00:20:59.452 } 00:20:59.452 } 00:20:59.452 }, 00:20:59.452 { 
00:20:59.452 "method": "nvmf_subsystem_add_listener", 00:20:59.452 "params": { 00:20:59.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.452 "listen_address": { 00:20:59.452 "trtype": "TCP", 00:20:59.452 "adrfam": "IPv4", 00:20:59.452 "traddr": "10.0.0.2", 00:20:59.452 "trsvcid": "4420" 00:20:59.452 }, 00:20:59.452 "secure_channel": false, 00:20:59.452 "sock_impl": "ssl" 00:20:59.452 } 00:20:59.452 } 00:20:59.452 ] 00:20:59.452 } 00:20:59.452 ] 00:20:59.452 }' 00:20:59.452 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.452 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2013657 00:20:59.452 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2013657 00:20:59.452 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:59.452 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2013657 ']' 00:20:59.452 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.452 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.452 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.452 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.452 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.713 [2024-11-19 18:20:00.945806] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:20:59.713 [2024-11-19 18:20:00.945867] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.713 [2024-11-19 18:20:01.035254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.713 [2024-11-19 18:20:01.064957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.713 [2024-11-19 18:20:01.064984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.713 [2024-11-19 18:20:01.064990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.713 [2024-11-19 18:20:01.064995] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.713 [2024-11-19 18:20:01.064999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
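For reference, the bdevperf `perform_tests` RPC earlier in this run emitted its statistics twice: as a human-readable latency table and as a JSON blob. A minimal sketch of cross-checking that blob follows; the field names and values are copied from the log above, and the `raw` literal is re-typed here for illustration rather than machine-extracted:

```python
import json

# Results blob as printed by bdevperf.py perform_tests earlier in this run
# (structure and values transcribed from the log output above).
raw = '''
{
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "verify",
      "status": "finished",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 1.011682,
      "iops": 6452.620487465429,
      "mibps": 25.20554877916183,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 19715.944575163398,
      "min_latency_us": 4587.52,
      "max_latency_us": 20971.52
    }
  ],
  "core_count": 1
}
'''

stats = json.loads(raw)
job = stats["results"][0]

# MiB/s is IOPS * io_size / 1 MiB, so the two reported fields must agree.
derived_mibps = job["iops"] * job["io_size"] / (1024 * 1024)
assert abs(derived_mibps - job["mibps"]) < 1e-6

print(f'{job["job"]}: {job["iops"]:.2f} IOPS, {derived_mibps:.2f} MiB/s, '
      f'{job["io_failed"]} failed')
```

This is why the table's MiB/s column (25.21) tracks the IOPS column exactly: with 4096-byte I/Os, MiB/s is simply IOPS divided by 256.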
00:20:59.713 [2024-11-19 18:20:01.065477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.972 [2024-11-19 18:20:01.258386] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.972 [2024-11-19 18:20:01.290419] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:59.972 [2024-11-19 18:20:01.290629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.540 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.540 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:00.540 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:00.540 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:00.540 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.540 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.540 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2013741 00:21:00.540 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2013741 /var/tmp/bdevperf.sock 00:21:00.540 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2013741 ']' 00:21:00.541 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.541 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.541 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:00.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.541 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:00.541 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.541 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.541 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:00.541 "subsystems": [ 00:21:00.541 { 00:21:00.541 "subsystem": "keyring", 00:21:00.541 "config": [ 00:21:00.541 { 00:21:00.541 "method": "keyring_file_add_key", 00:21:00.541 "params": { 00:21:00.541 "name": "key0", 00:21:00.541 "path": "/tmp/tmp.UPgGCcDP1P" 00:21:00.541 } 00:21:00.541 } 00:21:00.541 ] 00:21:00.541 }, 00:21:00.541 { 00:21:00.541 "subsystem": "iobuf", 00:21:00.541 "config": [ 00:21:00.541 { 00:21:00.541 "method": "iobuf_set_options", 00:21:00.541 "params": { 00:21:00.541 "small_pool_count": 8192, 00:21:00.541 "large_pool_count": 1024, 00:21:00.541 "small_bufsize": 8192, 00:21:00.541 "large_bufsize": 135168, 00:21:00.541 "enable_numa": false 00:21:00.541 } 00:21:00.541 } 00:21:00.541 ] 00:21:00.541 }, 00:21:00.541 { 00:21:00.541 "subsystem": "sock", 00:21:00.541 "config": [ 00:21:00.541 { 00:21:00.541 "method": "sock_set_default_impl", 00:21:00.541 "params": { 00:21:00.541 "impl_name": "posix" 00:21:00.541 } 00:21:00.541 }, 00:21:00.541 { 00:21:00.541 "method": "sock_impl_set_options", 00:21:00.541 "params": { 00:21:00.541 "impl_name": "ssl", 00:21:00.541 "recv_buf_size": 4096, 00:21:00.541 "send_buf_size": 4096, 00:21:00.541 "enable_recv_pipe": true, 00:21:00.541 "enable_quickack": false, 00:21:00.541 "enable_placement_id": 0, 00:21:00.541 "enable_zerocopy_send_server": true, 00:21:00.541 
"enable_zerocopy_send_client": false, 00:21:00.541 "zerocopy_threshold": 0, 00:21:00.541 "tls_version": 0, 00:21:00.541 "enable_ktls": false 00:21:00.541 } 00:21:00.541 }, 00:21:00.541 { 00:21:00.541 "method": "sock_impl_set_options", 00:21:00.541 "params": { 00:21:00.541 "impl_name": "posix", 00:21:00.541 "recv_buf_size": 2097152, 00:21:00.541 "send_buf_size": 2097152, 00:21:00.541 "enable_recv_pipe": true, 00:21:00.541 "enable_quickack": false, 00:21:00.541 "enable_placement_id": 0, 00:21:00.541 "enable_zerocopy_send_server": true, 00:21:00.541 "enable_zerocopy_send_client": false, 00:21:00.541 "zerocopy_threshold": 0, 00:21:00.541 "tls_version": 0, 00:21:00.541 "enable_ktls": false 00:21:00.541 } 00:21:00.541 } 00:21:00.541 ] 00:21:00.541 }, 00:21:00.541 { 00:21:00.541 "subsystem": "vmd", 00:21:00.541 "config": [] 00:21:00.541 }, 00:21:00.541 { 00:21:00.541 "subsystem": "accel", 00:21:00.541 "config": [ 00:21:00.541 { 00:21:00.541 "method": "accel_set_options", 00:21:00.541 "params": { 00:21:00.541 "small_cache_size": 128, 00:21:00.541 "large_cache_size": 16, 00:21:00.541 "task_count": 2048, 00:21:00.541 "sequence_count": 2048, 00:21:00.541 "buf_count": 2048 00:21:00.541 } 00:21:00.541 } 00:21:00.541 ] 00:21:00.541 }, 00:21:00.541 { 00:21:00.541 "subsystem": "bdev", 00:21:00.541 "config": [ 00:21:00.541 { 00:21:00.541 "method": "bdev_set_options", 00:21:00.541 "params": { 00:21:00.541 "bdev_io_pool_size": 65535, 00:21:00.541 "bdev_io_cache_size": 256, 00:21:00.541 "bdev_auto_examine": true, 00:21:00.541 "iobuf_small_cache_size": 128, 00:21:00.541 "iobuf_large_cache_size": 16 00:21:00.541 } 00:21:00.541 }, 00:21:00.541 { 00:21:00.541 "method": "bdev_raid_set_options", 00:21:00.541 "params": { 00:21:00.541 "process_window_size_kb": 1024, 00:21:00.541 "process_max_bandwidth_mb_sec": 0 00:21:00.541 } 00:21:00.541 }, 00:21:00.541 { 00:21:00.541 "method": "bdev_iscsi_set_options", 00:21:00.541 "params": { 00:21:00.541 "timeout_sec": 30 00:21:00.541 } 00:21:00.541 }, 
00:21:00.541 { 00:21:00.541 "method": "bdev_nvme_set_options", 00:21:00.541 "params": { 00:21:00.541 "action_on_timeout": "none", 00:21:00.541 "timeout_us": 0, 00:21:00.541 "timeout_admin_us": 0, 00:21:00.541 "keep_alive_timeout_ms": 10000, 00:21:00.541 "arbitration_burst": 0, 00:21:00.541 "low_priority_weight": 0, 00:21:00.541 "medium_priority_weight": 0, 00:21:00.541 "high_priority_weight": 0, 00:21:00.541 "nvme_adminq_poll_period_us": 10000, 00:21:00.541 "nvme_ioq_poll_period_us": 0, 00:21:00.541 "io_queue_requests": 512, 00:21:00.541 "delay_cmd_submit": true, 00:21:00.541 "transport_retry_count": 4, 00:21:00.541 "bdev_retry_count": 3, 00:21:00.541 "transport_ack_timeout": 0, 00:21:00.541 "ctrlr_loss_timeout_sec": 0, 00:21:00.541 "reconnect_delay_sec": 0, 00:21:00.541 "fast_io_fail_timeout_sec": 0, 00:21:00.541 "disable_auto_failback": false, 00:21:00.541 "generate_uuids": false, 00:21:00.541 "transport_tos": 0, 00:21:00.541 "nvme_error_stat": false, 00:21:00.541 "rdma_srq_size": 0, 00:21:00.541 "io_path_stat": false, 00:21:00.541 "allow_accel_sequence": false, 00:21:00.541 "rdma_max_cq_size": 0, 00:21:00.541 "rdma_cm_event_timeout_ms": 0, 00:21:00.541 "dhchap_digests": [ 00:21:00.541 "sha256", 00:21:00.541 "sha384", 00:21:00.541 "sha512" 00:21:00.541 ], 00:21:00.541 "dhchap_dhgroups": [ 00:21:00.541 "null", 00:21:00.541 "ffdhe2048", 00:21:00.541 "ffdhe3072", 00:21:00.541 "ffdhe4096", 00:21:00.541 "ffdhe6144", 00:21:00.541 "ffdhe8192" 00:21:00.541 ] 00:21:00.541 } 00:21:00.541 }, 00:21:00.541 { 00:21:00.541 "method": "bdev_nvme_attach_controller", 00:21:00.541 "params": { 00:21:00.541 "name": "nvme0", 00:21:00.541 "trtype": "TCP", 00:21:00.541 "adrfam": "IPv4", 00:21:00.541 "traddr": "10.0.0.2", 00:21:00.541 "trsvcid": "4420", 00:21:00.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.541 "prchk_reftag": false, 00:21:00.541 "prchk_guard": false, 00:21:00.541 "ctrlr_loss_timeout_sec": 0, 00:21:00.541 "reconnect_delay_sec": 0, 00:21:00.541 
"fast_io_fail_timeout_sec": 0, 00:21:00.541 "psk": "key0", 00:21:00.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.541 "hdgst": false, 00:21:00.541 "ddgst": false, 00:21:00.541 "multipath": "multipath" 00:21:00.541 } 00:21:00.541 }, 00:21:00.541 { 00:21:00.541 "method": "bdev_nvme_set_hotplug", 00:21:00.541 "params": { 00:21:00.541 "period_us": 100000, 00:21:00.541 "enable": false 00:21:00.541 } 00:21:00.541 }, 00:21:00.541 { 00:21:00.541 "method": "bdev_enable_histogram", 00:21:00.541 "params": { 00:21:00.541 "name": "nvme0n1", 00:21:00.541 "enable": true 00:21:00.541 } 00:21:00.541 }, 00:21:00.541 { 00:21:00.541 "method": "bdev_wait_for_examine" 00:21:00.541 } 00:21:00.541 ] 00:21:00.541 }, 00:21:00.541 { 00:21:00.541 "subsystem": "nbd", 00:21:00.541 "config": [] 00:21:00.541 } 00:21:00.541 ] 00:21:00.541 }' 00:21:00.541 [2024-11-19 18:20:01.814648] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:21:00.541 [2024-11-19 18:20:01.814703] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2013741 ] 00:21:00.541 [2024-11-19 18:20:01.897735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.541 [2024-11-19 18:20:01.927559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.801 [2024-11-19 18:20:02.062503] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:01.371 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.371 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:01.371 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:21:01.371 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:01.371 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.371 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:01.631 Running I/O for 1 seconds... 00:21:02.573 5886.00 IOPS, 22.99 MiB/s 00:21:02.573 Latency(us) 00:21:02.573 [2024-11-19T17:20:04.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.573 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:02.573 Verification LBA range: start 0x0 length 0x2000 00:21:02.573 nvme0n1 : 1.01 5943.75 23.22 0.00 0.00 21402.55 4696.75 22937.60 00:21:02.573 [2024-11-19T17:20:04.044Z] =================================================================================================================== 00:21:02.573 [2024-11-19T17:20:04.044Z] Total : 5943.75 23.22 0.00 0.00 21402.55 4696.75 22937.60 00:21:02.573 { 00:21:02.573 "results": [ 00:21:02.573 { 00:21:02.573 "job": "nvme0n1", 00:21:02.573 "core_mask": "0x2", 00:21:02.573 "workload": "verify", 00:21:02.573 "status": "finished", 00:21:02.573 "verify_range": { 00:21:02.573 "start": 0, 00:21:02.573 "length": 8192 00:21:02.573 }, 00:21:02.573 "queue_depth": 128, 00:21:02.573 "io_size": 4096, 00:21:02.573 "runtime": 1.011987, 00:21:02.573 "iops": 5943.752241876625, 00:21:02.573 "mibps": 23.217782194830566, 00:21:02.573 "io_failed": 0, 00:21:02.573 "io_timeout": 0, 00:21:02.573 "avg_latency_us": 21402.55221501801, 00:21:02.573 "min_latency_us": 4696.746666666667, 00:21:02.573 "max_latency_us": 22937.6 00:21:02.573 } 00:21:02.573 ], 00:21:02.573 "core_count": 1 00:21:02.573 } 00:21:02.573 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 
00:21:02.573 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:02.573 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:02.573 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:02.573 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:02.573 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:02.573 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:02.573 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:02.573 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:02.573 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:02.573 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:02.573 nvmf_trace.0 00:21:02.573 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:02.573 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2013741 00:21:02.573 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2013741 ']' 00:21:02.573 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2013741 00:21:02.573 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:02.573 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.573 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# ps --no-headers -o comm= 2013741 00:21:02.834 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2013741' 00:21:02.835 killing process with pid 2013741 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2013741 00:21:02.835 Received shutdown signal, test time was about 1.000000 seconds 00:21:02.835 00:21:02.835 Latency(us) 00:21:02.835 [2024-11-19T17:20:04.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.835 [2024-11-19T17:20:04.306Z] =================================================================================================================== 00:21:02.835 [2024-11-19T17:20:04.306Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2013741 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:02.835 rmmod nvme_tcp 00:21:02.835 rmmod nvme_fabrics 00:21:02.835 rmmod nvme_keyring 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2013657 ']' 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2013657 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2013657 ']' 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2013657 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.835 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2013657 00:21:03.095 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:03.095 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:03.095 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2013657' 00:21:03.095 killing process with pid 2013657 00:21:03.095 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2013657 00:21:03.095 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2013657 00:21:03.095 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:03.095 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:03.095 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:03.095 18:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:03.095 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:03.095 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:03.095 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:03.095 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:03.095 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:03.095 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.095 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.095 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.kBWysoGs96 /tmp/tmp.Jg3Fvl0aV6 /tmp/tmp.UPgGCcDP1P 00:21:05.639 00:21:05.639 real 1m27.946s 00:21:05.639 user 2m19.441s 00:21:05.639 sys 0m26.819s 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.639 ************************************ 00:21:05.639 END TEST nvmf_tls 00:21:05.639 ************************************ 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:05.639 
18:20:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:05.639 ************************************ 00:21:05.639 START TEST nvmf_fips 00:21:05.639 ************************************ 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:05.639 * Looking for test storage... 00:21:05.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:05.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.639 --rc genhtml_branch_coverage=1 00:21:05.639 --rc genhtml_function_coverage=1 00:21:05.639 --rc genhtml_legend=1 00:21:05.639 --rc geninfo_all_blocks=1 00:21:05.639 --rc geninfo_unexecuted_blocks=1 00:21:05.639 00:21:05.639 ' 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:05.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.639 --rc genhtml_branch_coverage=1 00:21:05.639 --rc genhtml_function_coverage=1 00:21:05.639 --rc genhtml_legend=1 00:21:05.639 --rc geninfo_all_blocks=1 00:21:05.639 --rc geninfo_unexecuted_blocks=1 00:21:05.639 00:21:05.639 ' 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:05.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.639 --rc genhtml_branch_coverage=1 00:21:05.639 --rc genhtml_function_coverage=1 00:21:05.639 --rc genhtml_legend=1 00:21:05.639 --rc geninfo_all_blocks=1 00:21:05.639 --rc geninfo_unexecuted_blocks=1 00:21:05.639 00:21:05.639 ' 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:05.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.639 --rc genhtml_branch_coverage=1 00:21:05.639 --rc genhtml_function_coverage=1 00:21:05.639 --rc genhtml_legend=1 00:21:05.639 --rc geninfo_all_blocks=1 00:21:05.639 --rc geninfo_unexecuted_blocks=1 00:21:05.639 00:21:05.639 ' 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.639 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:05.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:05.640 Error setting digest 00:21:05.640 40F2E82B727F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:05.640 40F2E82B727F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.640 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:05.641 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:05.641 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:05.641 18:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.641 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.641 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.641 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:05.641 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:05.641 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:05.641 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:13.971 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:13.971 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:13.972 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:13.972 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:13.972 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.972 18:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:13.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:21:13.972 00:21:13.972 --- 10.0.0.2 ping statistics --- 00:21:13.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.972 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:13.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:21:13.972 00:21:13.972 --- 10.0.0.1 ping statistics --- 00:21:13.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.972 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:13.972 18:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2018543 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2018543 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2018543 ']' 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.972 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.973 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.973 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:13.973 [2024-11-19 18:20:14.600475] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:21:13.973 [2024-11-19 18:20:14.600547] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.973 [2024-11-19 18:20:14.700277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.973 [2024-11-19 18:20:14.751058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.973 [2024-11-19 18:20:14.751112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.973 [2024-11-19 18:20:14.751120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.973 [2024-11-19 18:20:14.751127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.973 [2024-11-19 18:20:14.751134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:13.973 [2024-11-19 18:20:14.751912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.973 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.973 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:13.973 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:13.973 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:13.973 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:13.973 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.973 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:13.973 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:13.973 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:13.973 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Eho 00:21:14.234 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:14.234 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Eho 00:21:14.234 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Eho 00:21:14.234 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Eho 00:21:14.234 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:14.234 [2024-11-19 18:20:15.606191] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.234 [2024-11-19 18:20:15.622187] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.234 [2024-11-19 18:20:15.622483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.234 malloc0 00:21:14.234 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:14.234 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2018760 00:21:14.234 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2018760 /var/tmp/bdevperf.sock 00:21:14.234 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:14.234 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2018760 ']' 00:21:14.234 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.234 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.234 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:14.234 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.234 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:14.495 [2024-11-19 18:20:15.773641] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:21:14.495 [2024-11-19 18:20:15.773716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018760 ] 00:21:14.495 [2024-11-19 18:20:15.864358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.495 [2024-11-19 18:20:15.915190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.438 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.438 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:15.438 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Eho 00:21:15.438 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:15.438 [2024-11-19 18:20:16.888623] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:15.699 TLSTESTn1 00:21:15.699 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:15.699 Running I/O for 10 seconds... 
00:21:18.022 4726.00 IOPS, 18.46 MiB/s [2024-11-19T17:20:20.435Z] 5168.50 IOPS, 20.19 MiB/s [2024-11-19T17:20:21.377Z] 5507.00 IOPS, 21.51 MiB/s [2024-11-19T17:20:22.319Z] 5554.50 IOPS, 21.70 MiB/s [2024-11-19T17:20:23.259Z] 5591.40 IOPS, 21.84 MiB/s [2024-11-19T17:20:24.199Z] 5436.17 IOPS, 21.24 MiB/s [2024-11-19T17:20:25.138Z] 5482.43 IOPS, 21.42 MiB/s [2024-11-19T17:20:26.520Z] 5563.00 IOPS, 21.73 MiB/s [2024-11-19T17:20:27.461Z] 5596.78 IOPS, 21.86 MiB/s [2024-11-19T17:20:27.461Z] 5637.30 IOPS, 22.02 MiB/s 00:21:25.990 Latency(us) 00:21:25.990 [2024-11-19T17:20:27.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.990 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:25.990 Verification LBA range: start 0x0 length 0x2000 00:21:25.990 TLSTESTn1 : 10.01 5642.58 22.04 0.00 0.00 22652.28 5488.64 60730.03 00:21:25.990 [2024-11-19T17:20:27.461Z] =================================================================================================================== 00:21:25.990 [2024-11-19T17:20:27.461Z] Total : 5642.58 22.04 0.00 0.00 22652.28 5488.64 60730.03 00:21:25.990 { 00:21:25.990 "results": [ 00:21:25.990 { 00:21:25.990 "job": "TLSTESTn1", 00:21:25.990 "core_mask": "0x4", 00:21:25.990 "workload": "verify", 00:21:25.990 "status": "finished", 00:21:25.990 "verify_range": { 00:21:25.990 "start": 0, 00:21:25.990 "length": 8192 00:21:25.990 }, 00:21:25.990 "queue_depth": 128, 00:21:25.990 "io_size": 4096, 00:21:25.990 "runtime": 10.01332, 00:21:25.990 "iops": 5642.5840780080935, 00:21:25.990 "mibps": 22.041344054719115, 00:21:25.990 "io_failed": 0, 00:21:25.990 "io_timeout": 0, 00:21:25.990 "avg_latency_us": 22652.281314666998, 00:21:25.990 "min_latency_us": 5488.64, 00:21:25.990 "max_latency_us": 60730.026666666665 00:21:25.990 } 00:21:25.990 ], 00:21:25.990 "core_count": 1 00:21:25.990 } 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:25.990 18:20:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:25.990 nvmf_trace.0 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2018760 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2018760 ']' 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2018760 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2018760 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2018760' 00:21:25.990 killing process with pid 2018760 00:21:25.990 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2018760 00:21:25.991 Received shutdown signal, test time was about 10.000000 seconds 00:21:25.991 00:21:25.991 Latency(us) 00:21:25.991 [2024-11-19T17:20:27.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.991 [2024-11-19T17:20:27.462Z] =================================================================================================================== 00:21:25.991 [2024-11-19T17:20:27.462Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.991 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2018760 00:21:25.991 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:25.991 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:25.991 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:25.991 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:25.991 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:25.991 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:25.991 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:25.991 rmmod nvme_tcp 00:21:25.991 rmmod nvme_fabrics 00:21:26.250 rmmod nvme_keyring 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:26.250 18:20:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2018543 ']' 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2018543 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2018543 ']' 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2018543 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2018543 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2018543' 00:21:26.250 killing process with pid 2018543 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2018543 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2018543 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # 
iptr 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.250 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.794 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:28.794 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Eho 00:21:28.794 00:21:28.794 real 0m23.163s 00:21:28.794 user 0m24.280s 00:21:28.794 sys 0m10.155s 00:21:28.794 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.794 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.794 ************************************ 00:21:28.794 END TEST nvmf_fips 00:21:28.794 ************************************ 00:21:28.794 18:20:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:28.794 18:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:28.794 18:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:21:28.794 18:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:28.794 ************************************ 00:21:28.794 START TEST nvmf_control_msg_list 00:21:28.794 ************************************ 00:21:28.794 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:28.794 * Looking for test storage... 00:21:28.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:28.794 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:28.794 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:21:28.794 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:28.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.794 --rc genhtml_branch_coverage=1 00:21:28.794 --rc genhtml_function_coverage=1 00:21:28.794 --rc genhtml_legend=1 00:21:28.794 --rc geninfo_all_blocks=1 00:21:28.794 --rc geninfo_unexecuted_blocks=1 00:21:28.794 00:21:28.794 ' 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:28.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.794 --rc genhtml_branch_coverage=1 00:21:28.794 --rc genhtml_function_coverage=1 00:21:28.794 --rc genhtml_legend=1 00:21:28.794 --rc geninfo_all_blocks=1 00:21:28.794 --rc geninfo_unexecuted_blocks=1 00:21:28.794 00:21:28.794 ' 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:28.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.794 --rc genhtml_branch_coverage=1 00:21:28.794 --rc genhtml_function_coverage=1 00:21:28.794 --rc genhtml_legend=1 00:21:28.794 --rc geninfo_all_blocks=1 00:21:28.794 --rc geninfo_unexecuted_blocks=1 00:21:28.794 00:21:28.794 ' 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:28.794 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.794 --rc genhtml_branch_coverage=1 00:21:28.794 --rc genhtml_function_coverage=1 00:21:28.794 --rc genhtml_legend=1 00:21:28.794 --rc geninfo_all_blocks=1 00:21:28.794 --rc geninfo_unexecuted_blocks=1 00:21:28.794 00:21:28.794 ' 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:28.794 18:20:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.794 18:20:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:28.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:28.794 18:20:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.794 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:28.795 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:28.795 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:28.795 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.795 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:28.795 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.795 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:28.795 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:28.795 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:28.795 18:20:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:36.928 18:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.928 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:36.929 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:36.929 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:36.929 18:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:36.929 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:36.929 18:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:36.929 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:36.929 18:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:36.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:21:36.929 00:21:36.929 --- 10.0.0.2 ping statistics --- 00:21:36.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.929 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:36.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:36.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:21:36.929 00:21:36.929 --- 10.0.0.1 ping statistics --- 00:21:36.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.929 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2025200 00:21:36.929 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2025200 00:21:36.930 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:36.930 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2025200 ']' 00:21:36.930 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.930 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.930 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:36.930 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.930 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.930 [2024-11-19 18:20:37.568543] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:21:36.930 [2024-11-19 18:20:37.568608] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.930 [2024-11-19 18:20:37.666453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.930 [2024-11-19 18:20:37.717723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.930 [2024-11-19 18:20:37.717773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.930 [2024-11-19 18:20:37.717781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.930 [2024-11-19 18:20:37.717789] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.930 [2024-11-19 18:20:37.717795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:36.930 [2024-11-19 18:20:37.718568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.930 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.930 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:36.930 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:36.930 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.930 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:37.190 [2024-11-19 18:20:38.428784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:37.190 Malloc0 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:37.190 [2024-11-19 18:20:38.483146] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2025455 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2025456 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2025457 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2025455 00:21:37.190 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:37.190 [2024-11-19 18:20:38.594174] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:37.190 [2024-11-19 18:20:38.594554] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:37.190 [2024-11-19 18:20:38.594892] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:38.573 Initializing NVMe Controllers 00:21:38.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:38.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:38.574 Initialization complete. Launching workers. 00:21:38.574 ======================================================== 00:21:38.574 Latency(us) 00:21:38.574 Device Information : IOPS MiB/s Average min max 00:21:38.574 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40912.18 40814.07 41161.31 00:21:38.574 ======================================================== 00:21:38.574 Total : 25.00 0.10 40912.18 40814.07 41161.31 00:21:38.574 00:21:38.574 Initializing NVMe Controllers 00:21:38.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:38.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:38.574 Initialization complete. Launching workers. 
00:21:38.574 ======================================================== 00:21:38.574 Latency(us) 00:21:38.574 Device Information : IOPS MiB/s Average min max 00:21:38.574 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40892.92 40627.81 41104.55 00:21:38.574 ======================================================== 00:21:38.574 Total : 25.00 0.10 40892.92 40627.81 41104.55 00:21:38.574 00:21:38.574 Initializing NVMe Controllers 00:21:38.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:38.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:38.574 Initialization complete. Launching workers. 00:21:38.574 ======================================================== 00:21:38.574 Latency(us) 00:21:38.574 Device Information : IOPS MiB/s Average min max 00:21:38.574 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40912.33 40854.07 41088.80 00:21:38.574 ======================================================== 00:21:38.574 Total : 25.00 0.10 40912.33 40854.07 41088.80 00:21:38.574 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2025456 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2025457 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:38.574 18:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:38.574 rmmod nvme_tcp 00:21:38.574 rmmod nvme_fabrics 00:21:38.574 rmmod nvme_keyring 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2025200 ']' 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2025200 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2025200 ']' 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2025200 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2025200 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2025200' 00:21:38.574 killing process with pid 2025200 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2025200 00:21:38.574 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2025200 00:21:38.834 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:38.834 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:38.834 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:38.834 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:38.834 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:38.834 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:38.834 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:38.834 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:38.834 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:38.834 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.834 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.834 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.748 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:40.748 00:21:40.748 real 0m12.355s 00:21:40.748 user 0m8.099s 
00:21:40.748 sys 0m6.387s 00:21:40.748 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.748 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.748 ************************************ 00:21:40.748 END TEST nvmf_control_msg_list 00:21:40.748 ************************************ 00:21:41.009 18:20:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:41.009 18:20:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:41.009 18:20:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:41.009 18:20:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:41.009 ************************************ 00:21:41.009 START TEST nvmf_wait_for_buf 00:21:41.009 ************************************ 00:21:41.009 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:41.009 * Looking for test storage... 
00:21:41.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:41.009 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:41.009 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:41.009 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:21:41.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.010 --rc genhtml_branch_coverage=1 00:21:41.010 --rc genhtml_function_coverage=1 00:21:41.010 --rc genhtml_legend=1 00:21:41.010 --rc geninfo_all_blocks=1 00:21:41.010 --rc geninfo_unexecuted_blocks=1 00:21:41.010 00:21:41.010 ' 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:41.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.010 --rc genhtml_branch_coverage=1 00:21:41.010 --rc genhtml_function_coverage=1 00:21:41.010 --rc genhtml_legend=1 00:21:41.010 --rc geninfo_all_blocks=1 00:21:41.010 --rc geninfo_unexecuted_blocks=1 00:21:41.010 00:21:41.010 ' 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:41.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.010 --rc genhtml_branch_coverage=1 00:21:41.010 --rc genhtml_function_coverage=1 00:21:41.010 --rc genhtml_legend=1 00:21:41.010 --rc geninfo_all_blocks=1 00:21:41.010 --rc geninfo_unexecuted_blocks=1 00:21:41.010 00:21:41.010 ' 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:41.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.010 --rc genhtml_branch_coverage=1 00:21:41.010 --rc genhtml_function_coverage=1 00:21:41.010 --rc genhtml_legend=1 00:21:41.010 --rc geninfo_all_blocks=1 00:21:41.010 --rc geninfo_unexecuted_blocks=1 00:21:41.010 00:21:41.010 ' 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.010 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.271 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.271 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.271 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.271 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.271 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.271 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.271 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:41.271 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:41.271 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.271 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.271 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:41.271 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.271 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:41.271 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:41.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:41.272 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.424 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:49.425 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:49.425 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:49.425 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.425 18:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:49.425 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:49.425 18:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.425 18:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:49.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:21:49.425 00:21:49.425 --- 10.0.0.2 ping statistics --- 00:21:49.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.425 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:49.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:21:49.425 00:21:49.425 --- 10.0.0.1 ping statistics --- 00:21:49.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.425 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:49.425 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:49.425 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:49.425 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:49.425 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.425 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:49.425 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2029817 00:21:49.425 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 2029817 00:21:49.425 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:49.425 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2029817 ']' 00:21:49.425 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.425 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.425 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.425 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.426 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:49.426 [2024-11-19 18:20:50.098464] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:21:49.426 [2024-11-19 18:20:50.098535] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.426 [2024-11-19 18:20:50.210755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.426 [2024-11-19 18:20:50.262679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.426 [2024-11-19 18:20:50.262736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:49.426 [2024-11-19 18:20:50.262745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.426 [2024-11-19 18:20:50.262753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.426 [2024-11-19 18:20:50.262759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.426 [2024-11-19 18:20:50.263563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:49.687 
18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.687 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:49.687 Malloc0 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:21:49.687 [2024-11-19 18:20:51.091650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:49.687 [2024-11-19 18:20:51.127945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:49.687 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:49.948 [2024-11-19 18:20:51.230000] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:51.334 Initializing NVMe Controllers 00:21:51.334 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:51.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:51.334 Initialization complete. Launching workers. 00:21:51.334 ======================================================== 00:21:51.334 Latency(us) 00:21:51.334 Device Information : IOPS MiB/s Average min max 00:21:51.334 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32263.89 8010.81 63856.35 00:21:51.335 ======================================================== 00:21:51.335 Total : 129.00 16.12 32263.89 8010.81 63856.35 00:21:51.335 00:21:51.335 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:51.335 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:51.335 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.335 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:51.335 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.595 18:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:51.595 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:51.595 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:51.595 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:51.595 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:51.595 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:51.595 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:51.595 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:51.596 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:51.596 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:51.596 rmmod nvme_tcp 00:21:51.596 rmmod nvme_fabrics 00:21:51.596 rmmod nvme_keyring 00:21:51.596 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.596 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:51.596 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:51.596 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2029817 ']' 00:21:51.596 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2029817 00:21:51.596 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2029817 ']' 00:21:51.596 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2029817 
00:21:51.596 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname
00:21:51.596 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:51.596 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2029817
00:21:51.596 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:51.596 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:51.596 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2029817'
00:21:51.596 killing process with pid 2029817
00:21:51.596 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2029817
00:21:51.596 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2029817
00:21:51.856 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:51.856 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:51.856 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:51.856 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr
00:21:51.856 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:51.856 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save
00:21:51.856 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore
00:21:51.856 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:51.856 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:51.856 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:51.857 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:51.857 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:53.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:53.770
00:21:53.770 real 0m12.934s
00:21:53.770 user 0m5.260s
00:21:53.770 sys 0m6.274s
00:21:53.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:53.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:21:53.770 ************************************
00:21:53.770 END TEST nvmf_wait_for_buf
00:21:53.770 ************************************
00:21:54.030 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']'
00:21:54.030 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]]
00:21:54.030 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']'
00:21:54.030 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs
00:21:54.030 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable
00:21:54.030 18:20:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=()
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:02.170
18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:02.170 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.170 18:21:02 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:02.170 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:02.170 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:22:02.170 Found net devices under 0000:4b:00.1: cvl_0_1
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 ))
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:02.170 ************************************
00:22:02.170 START TEST nvmf_perf_adq
00:22:02.170 ************************************
00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq --
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:02.170 * Looking for test storage... 00:22:02.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.170 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:02.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.171 --rc genhtml_branch_coverage=1 00:22:02.171 --rc genhtml_function_coverage=1 00:22:02.171 --rc genhtml_legend=1 00:22:02.171 --rc geninfo_all_blocks=1 00:22:02.171 --rc geninfo_unexecuted_blocks=1 00:22:02.171 00:22:02.171 ' 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:02.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.171 --rc genhtml_branch_coverage=1 00:22:02.171 --rc genhtml_function_coverage=1 00:22:02.171 --rc genhtml_legend=1 00:22:02.171 --rc geninfo_all_blocks=1 00:22:02.171 --rc geninfo_unexecuted_blocks=1 00:22:02.171 00:22:02.171 ' 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:02.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.171 --rc genhtml_branch_coverage=1 00:22:02.171 --rc genhtml_function_coverage=1 00:22:02.171 --rc genhtml_legend=1 00:22:02.171 --rc geninfo_all_blocks=1 00:22:02.171 --rc geninfo_unexecuted_blocks=1 00:22:02.171 00:22:02.171 ' 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:02.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.171 --rc genhtml_branch_coverage=1 00:22:02.171 --rc genhtml_function_coverage=1 00:22:02.171 --rc genhtml_legend=1 00:22:02.171 --rc geninfo_all_blocks=1 00:22:02.171 --rc geninfo_unexecuted_blocks=1 00:22:02.171 00:22:02.171 ' 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.171 18:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:02.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0
00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs
00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:22:02.171 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:22:08.756 18:21:09
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:08.756 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:08.756 
Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:08.756 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:08.756 18:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.756 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:08.756 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:08.757 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.757 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:08.757 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.757 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:08.757 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:08.757 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:08.757 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:22:08.757 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:10.139 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:12.682 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:17.968 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:17.968 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:17.968 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.968 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:17.968 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:17.968 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:17.968 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.968 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:17.969 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:17.969 18:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:17.969 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:17.969 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:17.969 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.969 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.970 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:17.970 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:17.970 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.970 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.970 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:17.970 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:17.970 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.970 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.970 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.970 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.970 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:17.970 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.970 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.970 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.970 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:17.970 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:17.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:17.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:22:17.970 00:22:17.970 --- 10.0.0.2 ping statistics --- 00:22:17.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.970 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:22:17.970 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:22:17.970 00:22:17.970 --- 10.0.0.1 ping statistics --- 00:22:17.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.970 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2040622 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2040622 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2040622 ']' 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.970 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.970 [2024-11-19 18:21:19.119366] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:22:17.970 [2024-11-19 18:21:19.119433] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.970 [2024-11-19 18:21:19.220217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:17.970 [2024-11-19 18:21:19.275429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.970 [2024-11-19 18:21:19.275488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.970 [2024-11-19 18:21:19.275497] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.970 [2024-11-19 18:21:19.275504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.970 [2024-11-19 18:21:19.275510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:17.970 [2024-11-19 18:21:19.277579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.970 [2024-11-19 18:21:19.277740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.970 [2024-11-19 18:21:19.277900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.970 [2024-11-19 18:21:19.277901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.542 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.542 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:18.542 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:18.542 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:18.542 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.542 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.542 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:18.542 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:18.542 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:18.542 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.542 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:18.803 18:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.803 [2024-11-19 18:21:20.153855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.803 Malloc1 00:22:18.803 18:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.803 [2024-11-19 18:21:20.230221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2040955 00:22:18.803 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:18.803 18:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:21.362 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:21.362 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.362 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.362 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.362 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:21.362 "tick_rate": 2400000000, 00:22:21.362 "poll_groups": [ 00:22:21.362 { 00:22:21.362 "name": "nvmf_tgt_poll_group_000", 00:22:21.362 "admin_qpairs": 1, 00:22:21.362 "io_qpairs": 1, 00:22:21.362 "current_admin_qpairs": 1, 00:22:21.362 "current_io_qpairs": 1, 00:22:21.362 "pending_bdev_io": 0, 00:22:21.362 "completed_nvme_io": 16510, 00:22:21.362 "transports": [ 00:22:21.362 { 00:22:21.362 "trtype": "TCP" 00:22:21.362 } 00:22:21.362 ] 00:22:21.362 }, 00:22:21.362 { 00:22:21.362 "name": "nvmf_tgt_poll_group_001", 00:22:21.362 "admin_qpairs": 0, 00:22:21.362 "io_qpairs": 1, 00:22:21.362 "current_admin_qpairs": 0, 00:22:21.362 "current_io_qpairs": 1, 00:22:21.362 "pending_bdev_io": 0, 00:22:21.362 "completed_nvme_io": 18338, 00:22:21.362 "transports": [ 00:22:21.362 { 00:22:21.362 "trtype": "TCP" 00:22:21.362 } 00:22:21.362 ] 00:22:21.362 }, 00:22:21.362 { 00:22:21.362 "name": "nvmf_tgt_poll_group_002", 00:22:21.362 "admin_qpairs": 0, 00:22:21.362 "io_qpairs": 1, 00:22:21.362 "current_admin_qpairs": 0, 00:22:21.362 "current_io_qpairs": 1, 00:22:21.362 "pending_bdev_io": 0, 00:22:21.362 "completed_nvme_io": 18253, 00:22:21.362 
"transports": [ 00:22:21.362 { 00:22:21.362 "trtype": "TCP" 00:22:21.362 } 00:22:21.362 ] 00:22:21.362 }, 00:22:21.362 { 00:22:21.362 "name": "nvmf_tgt_poll_group_003", 00:22:21.362 "admin_qpairs": 0, 00:22:21.362 "io_qpairs": 1, 00:22:21.362 "current_admin_qpairs": 0, 00:22:21.362 "current_io_qpairs": 1, 00:22:21.362 "pending_bdev_io": 0, 00:22:21.362 "completed_nvme_io": 16924, 00:22:21.362 "transports": [ 00:22:21.362 { 00:22:21.362 "trtype": "TCP" 00:22:21.362 } 00:22:21.362 ] 00:22:21.362 } 00:22:21.362 ] 00:22:21.362 }' 00:22:21.362 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:21.362 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:21.362 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:21.362 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:21.362 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2040955 00:22:29.504 Initializing NVMe Controllers 00:22:29.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:29.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:29.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:29.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:29.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:29.504 Initialization complete. Launching workers. 
00:22:29.504 ======================================================== 00:22:29.504 Latency(us) 00:22:29.504 Device Information : IOPS MiB/s Average min max 00:22:29.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12759.21 49.84 5015.59 1273.92 12445.29 00:22:29.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13541.60 52.90 4725.28 1128.14 13798.35 00:22:29.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13456.50 52.56 4755.56 1316.92 13551.23 00:22:29.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13115.90 51.23 4879.91 1560.78 14126.61 00:22:29.504 ======================================================== 00:22:29.504 Total : 52873.21 206.54 4841.40 1128.14 14126.61 00:22:29.504 00:22:29.504 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:29.504 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:29.504 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:29.504 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:29.504 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:29.504 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:29.504 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:29.504 rmmod nvme_tcp 00:22:29.504 rmmod nvme_fabrics 00:22:29.504 rmmod nvme_keyring 00:22:29.504 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:29.504 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:29.504 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:29.504 18:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2040622 ']' 00:22:29.504 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2040622 00:22:29.504 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2040622 ']' 00:22:29.504 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2040622 00:22:29.504 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:29.504 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:29.504 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2040622 00:22:29.505 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:29.505 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:29.505 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2040622' 00:22:29.505 killing process with pid 2040622 00:22:29.505 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2040622 00:22:29.505 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2040622 00:22:29.505 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:29.505 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:29.505 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:29.505 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:29.505 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:29.505 
18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:29.505 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:29.505 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:29.505 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:29.505 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.505 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.505 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.416 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:31.416 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:31.416 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:31.416 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:33.328 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:35.237 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:40.533 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:40.533 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:40.533 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.533 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:40.533 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:40.533 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:40.533 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.533 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.533 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.533 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:40.533 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.534 18:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:40.534 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:40.534 
Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:40.534 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:40.534 18:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:40.534 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:22:40.534 00:22:40.534 --- 10.0.0.2 ping statistics --- 00:22:40.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.534 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:22:40.534 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:22:40.535 00:22:40.535 --- 10.0.0.1 ping statistics --- 00:22:40.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.535 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:40.535 net.core.busy_poll = 1 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:40.535 net.core.busy_read = 1 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2045412 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2045412 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2045412 ']' 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.535 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.796 [2024-11-19 18:21:42.044519] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:22:40.796 [2024-11-19 18:21:42.044586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.796 [2024-11-19 18:21:42.144929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.796 [2024-11-19 18:21:42.198350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.796 [2024-11-19 18:21:42.198400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.796 [2024-11-19 18:21:42.198409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.796 [2024-11-19 18:21:42.198416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:40.796 [2024-11-19 18:21:42.198422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.796 [2024-11-19 18:21:42.200788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.796 [2024-11-19 18:21:42.200949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.796 [2024-11-19 18:21:42.201109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.796 [2024-11-19 18:21:42.201112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.738 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.738 [2024-11-19 18:21:43.069353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.738 18:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.738 Malloc1 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.738 [2024-11-19 18:21:43.148382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2045768 
00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:41.738 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:44.283 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:44.283 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.283 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.283 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.283 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:44.283 "tick_rate": 2400000000, 00:22:44.283 "poll_groups": [ 00:22:44.283 { 00:22:44.283 "name": "nvmf_tgt_poll_group_000", 00:22:44.283 "admin_qpairs": 1, 00:22:44.283 "io_qpairs": 4, 00:22:44.283 "current_admin_qpairs": 1, 00:22:44.283 "current_io_qpairs": 4, 00:22:44.283 "pending_bdev_io": 0, 00:22:44.283 "completed_nvme_io": 33145, 00:22:44.283 "transports": [ 00:22:44.283 { 00:22:44.283 "trtype": "TCP" 00:22:44.283 } 00:22:44.283 ] 00:22:44.283 }, 00:22:44.283 { 00:22:44.283 "name": "nvmf_tgt_poll_group_001", 00:22:44.283 "admin_qpairs": 0, 00:22:44.283 "io_qpairs": 0, 00:22:44.283 "current_admin_qpairs": 0, 00:22:44.283 "current_io_qpairs": 0, 00:22:44.283 "pending_bdev_io": 0, 00:22:44.283 "completed_nvme_io": 0, 00:22:44.283 "transports": [ 00:22:44.283 { 00:22:44.283 "trtype": "TCP" 00:22:44.283 } 00:22:44.283 ] 00:22:44.283 }, 00:22:44.283 { 00:22:44.283 "name": "nvmf_tgt_poll_group_002", 00:22:44.283 "admin_qpairs": 0, 00:22:44.283 "io_qpairs": 0, 00:22:44.283 "current_admin_qpairs": 0, 00:22:44.283 
"current_io_qpairs": 0, 00:22:44.283 "pending_bdev_io": 0, 00:22:44.283 "completed_nvme_io": 0, 00:22:44.283 "transports": [ 00:22:44.283 { 00:22:44.283 "trtype": "TCP" 00:22:44.283 } 00:22:44.283 ] 00:22:44.283 }, 00:22:44.283 { 00:22:44.283 "name": "nvmf_tgt_poll_group_003", 00:22:44.283 "admin_qpairs": 0, 00:22:44.283 "io_qpairs": 0, 00:22:44.283 "current_admin_qpairs": 0, 00:22:44.283 "current_io_qpairs": 0, 00:22:44.283 "pending_bdev_io": 0, 00:22:44.283 "completed_nvme_io": 0, 00:22:44.283 "transports": [ 00:22:44.283 { 00:22:44.283 "trtype": "TCP" 00:22:44.283 } 00:22:44.283 ] 00:22:44.283 } 00:22:44.283 ] 00:22:44.283 }' 00:22:44.283 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:44.283 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:44.283 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:22:44.283 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:22:44.283 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2045768 00:22:52.421 Initializing NVMe Controllers 00:22:52.421 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:52.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:52.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:52.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:52.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:52.421 Initialization complete. Launching workers. 
00:22:52.421 ======================================================== 00:22:52.421 Latency(us) 00:22:52.421 Device Information : IOPS MiB/s Average min max 00:22:52.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5980.30 23.36 10746.14 1391.87 60939.10 00:22:52.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5927.30 23.15 10797.80 1236.29 59789.21 00:22:52.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6534.90 25.53 9800.91 964.58 58271.56 00:22:52.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5857.40 22.88 10926.00 978.72 56683.68 00:22:52.421 ======================================================== 00:22:52.421 Total : 24299.89 94.92 10547.90 964.58 60939.10 00:22:52.421 00:22:52.421 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:52.421 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:52.421 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:52.421 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:52.421 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:52.421 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:52.421 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:52.421 rmmod nvme_tcp 00:22:52.421 rmmod nvme_fabrics 00:22:52.421 rmmod nvme_keyring 00:22:52.421 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:52.421 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:52.421 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:52.421 18:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2045412 ']' 00:22:52.421 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2045412 00:22:52.421 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2045412 ']' 00:22:52.421 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2045412 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2045412 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2045412' 00:22:52.422 killing process with pid 2045412 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2045412 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2045412 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:52.422 
18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.422 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.338 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:54.338 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:54.338 00:22:54.338 real 0m53.259s 00:22:54.338 user 2m50.677s 00:22:54.338 sys 0m10.975s 00:22:54.338 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:54.338 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.338 ************************************ 00:22:54.338 END TEST nvmf_perf_adq 00:22:54.338 ************************************ 00:22:54.338 18:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:54.338 18:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:54.338 18:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:54.338 18:21:55 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:22:54.338 ************************************ 00:22:54.338 START TEST nvmf_shutdown 00:22:54.338 ************************************ 00:22:54.338 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:54.600 * Looking for test storage... 00:22:54.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:54.600 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:54.600 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:54.600 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:54.600 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:54.600 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:54.600 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:54.600 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:54.600 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:54.600 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:54.600 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:54.600 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:54.600 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:54.601 18:21:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:54.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.601 --rc genhtml_branch_coverage=1 00:22:54.601 --rc genhtml_function_coverage=1 00:22:54.601 --rc genhtml_legend=1 00:22:54.601 --rc geninfo_all_blocks=1 00:22:54.601 --rc geninfo_unexecuted_blocks=1 00:22:54.601 00:22:54.601 ' 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:54.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.601 --rc genhtml_branch_coverage=1 00:22:54.601 --rc genhtml_function_coverage=1 00:22:54.601 --rc genhtml_legend=1 00:22:54.601 --rc geninfo_all_blocks=1 00:22:54.601 --rc geninfo_unexecuted_blocks=1 00:22:54.601 00:22:54.601 ' 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:54.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.601 --rc genhtml_branch_coverage=1 00:22:54.601 --rc genhtml_function_coverage=1 00:22:54.601 --rc genhtml_legend=1 00:22:54.601 --rc geninfo_all_blocks=1 00:22:54.601 --rc geninfo_unexecuted_blocks=1 00:22:54.601 00:22:54.601 ' 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:54.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.601 --rc genhtml_branch_coverage=1 00:22:54.601 --rc genhtml_function_coverage=1 00:22:54.601 --rc genhtml_legend=1 00:22:54.601 --rc geninfo_all_blocks=1 00:22:54.601 --rc geninfo_unexecuted_blocks=1 00:22:54.601 00:22:54.601 ' 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.601 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:54.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:54.601 ************************************ 00:22:54.601 START TEST nvmf_shutdown_tc1 00:22:54.601 ************************************ 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:54.601 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:54.602 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.602 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:54.602 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:54.602 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:54.602 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.602 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:54.602 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.602 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:54.602 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:54.602 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:54.602 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:02.907 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.907 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:02.907 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:02.907 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:02.907 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:02.907 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:02.907 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:02.907 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:02.907 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:02.907 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:02.907 18:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:02.907 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:02.907 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:02.907 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:02.907 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:02.907 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.907 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.907 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.908 18:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:02.908 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.908 18:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:02.908 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:02.908 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:02.908 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:02.908 18:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:02.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:23:02.908 00:23:02.908 --- 10.0.0.2 ping statistics --- 00:23:02.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.908 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:02.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:02.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:23:02.908 00:23:02.908 --- 10.0.0.1 ping statistics --- 00:23:02.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.908 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:02.908 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:02.909 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:02.909 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:02.909 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.909 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:02.909 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2051946 00:23:02.909 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2051946 00:23:02.909 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:02.909 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2051946 ']' 00:23:02.909 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.909 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.909 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:02.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.909 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.909 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:02.909 [2024-11-19 18:22:03.693874] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:23:02.909 [2024-11-19 18:22:03.693938] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.909 [2024-11-19 18:22:03.795663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:02.909 [2024-11-19 18:22:03.848933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.909 [2024-11-19 18:22:03.848986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.909 [2024-11-19 18:22:03.848995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.909 [2024-11-19 18:22:03.849002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.909 [2024-11-19 18:22:03.849008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:02.909 [2024-11-19 18:22:03.851087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.909 [2024-11-19 18:22:03.851231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.909 [2024-11-19 18:22:03.851475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:02.909 [2024-11-19 18:22:03.851476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.169 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.169 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:03.169 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:03.169 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:03.170 [2024-11-19 18:22:04.575988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.170 18:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:03.170 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.430 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:03.430 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:03.430 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:03.430 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:03.430 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.430 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:03.430 Malloc1 00:23:03.430 [2024-11-19 18:22:04.697938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.430 Malloc2 00:23:03.430 Malloc3 00:23:03.430 Malloc4 00:23:03.430 Malloc5 00:23:03.691 Malloc6 00:23:03.691 Malloc7 00:23:03.691 Malloc8 00:23:03.691 Malloc9 
00:23:03.691 Malloc10 00:23:03.691 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.691 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:03.691 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:03.691 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:03.691 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2052302 00:23:03.691 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2052302 /var/tmp/bdevperf.sock 00:23:03.691 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2052302 ']' 00:23:03.952 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:03.952 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.952 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:03.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:03.952 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:03.952 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.952 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:03.952 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:03.952 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:03.952 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:03.952 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.952 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.952 { 00:23:03.952 "params": { 00:23:03.952 "name": "Nvme$subsystem", 00:23:03.952 "trtype": "$TEST_TRANSPORT", 00:23:03.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.952 "adrfam": "ipv4", 00:23:03.952 "trsvcid": "$NVMF_PORT", 00:23:03.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.952 "hdgst": ${hdgst:-false}, 00:23:03.952 "ddgst": ${ddgst:-false} 00:23:03.952 }, 00:23:03.952 "method": "bdev_nvme_attach_controller" 00:23:03.952 } 00:23:03.952 EOF 00:23:03.952 )") 00:23:03.952 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.952 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.952 18:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.952 { 00:23:03.952 "params": { 00:23:03.952 "name": "Nvme$subsystem", 00:23:03.952 "trtype": "$TEST_TRANSPORT", 00:23:03.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.952 "adrfam": "ipv4", 00:23:03.952 "trsvcid": "$NVMF_PORT", 00:23:03.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.952 "hdgst": ${hdgst:-false}, 00:23:03.952 "ddgst": ${ddgst:-false} 00:23:03.952 }, 00:23:03.952 "method": "bdev_nvme_attach_controller" 00:23:03.952 } 00:23:03.952 EOF 00:23:03.952 )") 00:23:03.952 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.952 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.952 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.953 { 00:23:03.953 "params": { 00:23:03.953 "name": "Nvme$subsystem", 00:23:03.953 "trtype": "$TEST_TRANSPORT", 00:23:03.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.953 "adrfam": "ipv4", 00:23:03.953 "trsvcid": "$NVMF_PORT", 00:23:03.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.953 "hdgst": ${hdgst:-false}, 00:23:03.953 "ddgst": ${ddgst:-false} 00:23:03.953 }, 00:23:03.953 "method": "bdev_nvme_attach_controller" 00:23:03.953 } 00:23:03.953 EOF 00:23:03.953 )") 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.953 { 
00:23:03.953 "params": { 00:23:03.953 "name": "Nvme$subsystem", 00:23:03.953 "trtype": "$TEST_TRANSPORT", 00:23:03.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.953 "adrfam": "ipv4", 00:23:03.953 "trsvcid": "$NVMF_PORT", 00:23:03.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.953 "hdgst": ${hdgst:-false}, 00:23:03.953 "ddgst": ${ddgst:-false} 00:23:03.953 }, 00:23:03.953 "method": "bdev_nvme_attach_controller" 00:23:03.953 } 00:23:03.953 EOF 00:23:03.953 )") 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.953 { 00:23:03.953 "params": { 00:23:03.953 "name": "Nvme$subsystem", 00:23:03.953 "trtype": "$TEST_TRANSPORT", 00:23:03.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.953 "adrfam": "ipv4", 00:23:03.953 "trsvcid": "$NVMF_PORT", 00:23:03.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.953 "hdgst": ${hdgst:-false}, 00:23:03.953 "ddgst": ${ddgst:-false} 00:23:03.953 }, 00:23:03.953 "method": "bdev_nvme_attach_controller" 00:23:03.953 } 00:23:03.953 EOF 00:23:03.953 )") 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.953 { 00:23:03.953 "params": { 00:23:03.953 "name": "Nvme$subsystem", 00:23:03.953 "trtype": "$TEST_TRANSPORT", 00:23:03.953 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:23:03.953 "adrfam": "ipv4", 00:23:03.953 "trsvcid": "$NVMF_PORT", 00:23:03.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.953 "hdgst": ${hdgst:-false}, 00:23:03.953 "ddgst": ${ddgst:-false} 00:23:03.953 }, 00:23:03.953 "method": "bdev_nvme_attach_controller" 00:23:03.953 } 00:23:03.953 EOF 00:23:03.953 )") 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.953 [2024-11-19 18:22:05.208578] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:23:03.953 [2024-11-19 18:22:05.208652] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.953 { 00:23:03.953 "params": { 00:23:03.953 "name": "Nvme$subsystem", 00:23:03.953 "trtype": "$TEST_TRANSPORT", 00:23:03.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.953 "adrfam": "ipv4", 00:23:03.953 "trsvcid": "$NVMF_PORT", 00:23:03.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.953 "hdgst": ${hdgst:-false}, 00:23:03.953 "ddgst": ${ddgst:-false} 00:23:03.953 }, 00:23:03.953 "method": "bdev_nvme_attach_controller" 00:23:03.953 } 00:23:03.953 EOF 00:23:03.953 )") 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.953 { 00:23:03.953 "params": { 00:23:03.953 "name": "Nvme$subsystem", 00:23:03.953 "trtype": "$TEST_TRANSPORT", 00:23:03.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.953 "adrfam": "ipv4", 00:23:03.953 "trsvcid": "$NVMF_PORT", 00:23:03.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.953 "hdgst": ${hdgst:-false}, 00:23:03.953 "ddgst": ${ddgst:-false} 00:23:03.953 }, 00:23:03.953 "method": "bdev_nvme_attach_controller" 00:23:03.953 } 00:23:03.953 EOF 00:23:03.953 )") 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:03.953 { 00:23:03.953 "params": { 00:23:03.953 "name": "Nvme$subsystem", 00:23:03.953 "trtype": "$TEST_TRANSPORT", 00:23:03.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.953 "adrfam": "ipv4", 00:23:03.953 "trsvcid": "$NVMF_PORT", 00:23:03.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.953 "hdgst": ${hdgst:-false}, 00:23:03.953 "ddgst": ${ddgst:-false} 00:23:03.953 }, 00:23:03.953 "method": "bdev_nvme_attach_controller" 00:23:03.953 } 00:23:03.953 EOF 00:23:03.953 )") 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:23:03.953 { 00:23:03.953 "params": { 00:23:03.953 "name": "Nvme$subsystem", 00:23:03.953 "trtype": "$TEST_TRANSPORT", 00:23:03.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.953 "adrfam": "ipv4", 00:23:03.953 "trsvcid": "$NVMF_PORT", 00:23:03.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.953 "hdgst": ${hdgst:-false}, 00:23:03.953 "ddgst": ${ddgst:-false} 00:23:03.953 }, 00:23:03.953 "method": "bdev_nvme_attach_controller" 00:23:03.953 } 00:23:03.953 EOF 00:23:03.953 )") 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:03.953 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:03.953 "params": { 00:23:03.953 "name": "Nvme1", 00:23:03.953 "trtype": "tcp", 00:23:03.953 "traddr": "10.0.0.2", 00:23:03.953 "adrfam": "ipv4", 00:23:03.953 "trsvcid": "4420", 00:23:03.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.953 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:03.953 "hdgst": false, 00:23:03.953 "ddgst": false 00:23:03.953 }, 00:23:03.953 "method": "bdev_nvme_attach_controller" 00:23:03.953 },{ 00:23:03.953 "params": { 00:23:03.953 "name": "Nvme2", 00:23:03.953 "trtype": "tcp", 00:23:03.953 "traddr": "10.0.0.2", 00:23:03.953 "adrfam": "ipv4", 00:23:03.953 "trsvcid": "4420", 00:23:03.953 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:03.953 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:03.953 "hdgst": false, 00:23:03.953 "ddgst": false 00:23:03.953 }, 00:23:03.953 "method": "bdev_nvme_attach_controller" 00:23:03.953 },{ 00:23:03.953 "params": { 00:23:03.953 "name": "Nvme3", 00:23:03.953 "trtype": "tcp", 00:23:03.953 "traddr": 
"10.0.0.2", 00:23:03.953 "adrfam": "ipv4", 00:23:03.953 "trsvcid": "4420", 00:23:03.953 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:03.953 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:03.953 "hdgst": false, 00:23:03.953 "ddgst": false 00:23:03.953 }, 00:23:03.953 "method": "bdev_nvme_attach_controller" 00:23:03.953 },{ 00:23:03.953 "params": { 00:23:03.953 "name": "Nvme4", 00:23:03.953 "trtype": "tcp", 00:23:03.953 "traddr": "10.0.0.2", 00:23:03.953 "adrfam": "ipv4", 00:23:03.953 "trsvcid": "4420", 00:23:03.953 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:03.953 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:03.953 "hdgst": false, 00:23:03.953 "ddgst": false 00:23:03.953 }, 00:23:03.953 "method": "bdev_nvme_attach_controller" 00:23:03.953 },{ 00:23:03.953 "params": { 00:23:03.954 "name": "Nvme5", 00:23:03.954 "trtype": "tcp", 00:23:03.954 "traddr": "10.0.0.2", 00:23:03.954 "adrfam": "ipv4", 00:23:03.954 "trsvcid": "4420", 00:23:03.954 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:03.954 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:03.954 "hdgst": false, 00:23:03.954 "ddgst": false 00:23:03.954 }, 00:23:03.954 "method": "bdev_nvme_attach_controller" 00:23:03.954 },{ 00:23:03.954 "params": { 00:23:03.954 "name": "Nvme6", 00:23:03.954 "trtype": "tcp", 00:23:03.954 "traddr": "10.0.0.2", 00:23:03.954 "adrfam": "ipv4", 00:23:03.954 "trsvcid": "4420", 00:23:03.954 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:03.954 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:03.954 "hdgst": false, 00:23:03.954 "ddgst": false 00:23:03.954 }, 00:23:03.954 "method": "bdev_nvme_attach_controller" 00:23:03.954 },{ 00:23:03.954 "params": { 00:23:03.954 "name": "Nvme7", 00:23:03.954 "trtype": "tcp", 00:23:03.954 "traddr": "10.0.0.2", 00:23:03.954 "adrfam": "ipv4", 00:23:03.954 "trsvcid": "4420", 00:23:03.954 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:03.954 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:03.954 "hdgst": false, 00:23:03.954 "ddgst": false 00:23:03.954 }, 00:23:03.954 
"method": "bdev_nvme_attach_controller" 00:23:03.954 },{ 00:23:03.954 "params": { 00:23:03.954 "name": "Nvme8", 00:23:03.954 "trtype": "tcp", 00:23:03.954 "traddr": "10.0.0.2", 00:23:03.954 "adrfam": "ipv4", 00:23:03.954 "trsvcid": "4420", 00:23:03.954 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:03.954 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:03.954 "hdgst": false, 00:23:03.954 "ddgst": false 00:23:03.954 }, 00:23:03.954 "method": "bdev_nvme_attach_controller" 00:23:03.954 },{ 00:23:03.954 "params": { 00:23:03.954 "name": "Nvme9", 00:23:03.954 "trtype": "tcp", 00:23:03.954 "traddr": "10.0.0.2", 00:23:03.954 "adrfam": "ipv4", 00:23:03.954 "trsvcid": "4420", 00:23:03.954 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:03.954 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:03.954 "hdgst": false, 00:23:03.954 "ddgst": false 00:23:03.954 }, 00:23:03.954 "method": "bdev_nvme_attach_controller" 00:23:03.954 },{ 00:23:03.954 "params": { 00:23:03.954 "name": "Nvme10", 00:23:03.954 "trtype": "tcp", 00:23:03.954 "traddr": "10.0.0.2", 00:23:03.954 "adrfam": "ipv4", 00:23:03.954 "trsvcid": "4420", 00:23:03.954 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:03.954 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:03.954 "hdgst": false, 00:23:03.954 "ddgst": false 00:23:03.954 }, 00:23:03.954 "method": "bdev_nvme_attach_controller" 00:23:03.954 }' 00:23:03.954 [2024-11-19 18:22:05.304626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.954 [2024-11-19 18:22:05.358120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.341 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.341 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:05.341 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 
00:23:05.341 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.341 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:05.341 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.341 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2052302 00:23:05.341 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:05.341 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:06.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2052302 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:06.282 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2051946 00:23:06.282 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:06.282 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:06.282 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:06.282 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:06.282 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:06.282 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:06.282 { 00:23:06.282 "params": { 00:23:06.282 "name": "Nvme$subsystem", 00:23:06.282 "trtype": "$TEST_TRANSPORT", 00:23:06.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.282 "adrfam": "ipv4", 00:23:06.282 "trsvcid": "$NVMF_PORT", 00:23:06.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.282 "hdgst": ${hdgst:-false}, 00:23:06.282 "ddgst": ${ddgst:-false} 00:23:06.282 }, 00:23:06.282 "method": "bdev_nvme_attach_controller" 00:23:06.282 } 00:23:06.282 EOF 00:23:06.282 )") 00:23:06.282 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:06.543 { 00:23:06.543 "params": { 00:23:06.543 "name": "Nvme$subsystem", 00:23:06.543 "trtype": "$TEST_TRANSPORT", 00:23:06.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.543 "adrfam": "ipv4", 00:23:06.543 "trsvcid": "$NVMF_PORT", 00:23:06.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.543 "hdgst": ${hdgst:-false}, 00:23:06.543 "ddgst": ${ddgst:-false} 00:23:06.543 }, 00:23:06.543 "method": "bdev_nvme_attach_controller" 00:23:06.543 } 00:23:06.543 EOF 00:23:06.543 )") 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:06.543 { 00:23:06.543 "params": { 00:23:06.543 "name": "Nvme$subsystem", 
00:23:06.543 "trtype": "$TEST_TRANSPORT", 00:23:06.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.543 "adrfam": "ipv4", 00:23:06.543 "trsvcid": "$NVMF_PORT", 00:23:06.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.543 "hdgst": ${hdgst:-false}, 00:23:06.543 "ddgst": ${ddgst:-false} 00:23:06.543 }, 00:23:06.543 "method": "bdev_nvme_attach_controller" 00:23:06.543 } 00:23:06.543 EOF 00:23:06.543 )") 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:06.543 { 00:23:06.543 "params": { 00:23:06.543 "name": "Nvme$subsystem", 00:23:06.543 "trtype": "$TEST_TRANSPORT", 00:23:06.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.543 "adrfam": "ipv4", 00:23:06.543 "trsvcid": "$NVMF_PORT", 00:23:06.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.543 "hdgst": ${hdgst:-false}, 00:23:06.543 "ddgst": ${ddgst:-false} 00:23:06.543 }, 00:23:06.543 "method": "bdev_nvme_attach_controller" 00:23:06.543 } 00:23:06.543 EOF 00:23:06.543 )") 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:06.543 { 00:23:06.543 "params": { 00:23:06.543 "name": "Nvme$subsystem", 00:23:06.543 "trtype": "$TEST_TRANSPORT", 00:23:06.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.543 "adrfam": "ipv4", 
00:23:06.543 "trsvcid": "$NVMF_PORT", 00:23:06.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.543 "hdgst": ${hdgst:-false}, 00:23:06.543 "ddgst": ${ddgst:-false} 00:23:06.543 }, 00:23:06.543 "method": "bdev_nvme_attach_controller" 00:23:06.543 } 00:23:06.543 EOF 00:23:06.543 )") 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:06.543 { 00:23:06.543 "params": { 00:23:06.543 "name": "Nvme$subsystem", 00:23:06.543 "trtype": "$TEST_TRANSPORT", 00:23:06.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.543 "adrfam": "ipv4", 00:23:06.543 "trsvcid": "$NVMF_PORT", 00:23:06.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.543 "hdgst": ${hdgst:-false}, 00:23:06.543 "ddgst": ${ddgst:-false} 00:23:06.543 }, 00:23:06.543 "method": "bdev_nvme_attach_controller" 00:23:06.543 } 00:23:06.543 EOF 00:23:06.543 )") 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:06.543 { 00:23:06.543 "params": { 00:23:06.543 "name": "Nvme$subsystem", 00:23:06.543 "trtype": "$TEST_TRANSPORT", 00:23:06.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.543 "adrfam": "ipv4", 00:23:06.543 "trsvcid": "$NVMF_PORT", 00:23:06.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.543 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:06.543 "hdgst": ${hdgst:-false}, 00:23:06.543 "ddgst": ${ddgst:-false} 00:23:06.543 }, 00:23:06.543 "method": "bdev_nvme_attach_controller" 00:23:06.543 } 00:23:06.543 EOF 00:23:06.543 )") 00:23:06.543 [2024-11-19 18:22:07.793659] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:23:06.543 [2024-11-19 18:22:07.793712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052947 ] 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:06.543 { 00:23:06.543 "params": { 00:23:06.543 "name": "Nvme$subsystem", 00:23:06.543 "trtype": "$TEST_TRANSPORT", 00:23:06.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.543 "adrfam": "ipv4", 00:23:06.543 "trsvcid": "$NVMF_PORT", 00:23:06.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.543 "hdgst": ${hdgst:-false}, 00:23:06.543 "ddgst": ${ddgst:-false} 00:23:06.543 }, 00:23:06.543 "method": "bdev_nvme_attach_controller" 00:23:06.543 } 00:23:06.543 EOF 00:23:06.543 )") 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:06.543 { 00:23:06.543 
"params": { 00:23:06.543 "name": "Nvme$subsystem", 00:23:06.543 "trtype": "$TEST_TRANSPORT", 00:23:06.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.543 "adrfam": "ipv4", 00:23:06.543 "trsvcid": "$NVMF_PORT", 00:23:06.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.543 "hdgst": ${hdgst:-false}, 00:23:06.543 "ddgst": ${ddgst:-false} 00:23:06.543 }, 00:23:06.543 "method": "bdev_nvme_attach_controller" 00:23:06.543 } 00:23:06.543 EOF 00:23:06.543 )") 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:06.543 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:06.543 { 00:23:06.543 "params": { 00:23:06.543 "name": "Nvme$subsystem", 00:23:06.544 "trtype": "$TEST_TRANSPORT", 00:23:06.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.544 "adrfam": "ipv4", 00:23:06.544 "trsvcid": "$NVMF_PORT", 00:23:06.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.544 "hdgst": ${hdgst:-false}, 00:23:06.544 "ddgst": ${ddgst:-false} 00:23:06.544 }, 00:23:06.544 "method": "bdev_nvme_attach_controller" 00:23:06.544 } 00:23:06.544 EOF 00:23:06.544 )") 00:23:06.544 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:06.544 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:23:06.544 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:06.544 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:06.544 "params": { 00:23:06.544 "name": "Nvme1", 00:23:06.544 "trtype": "tcp", 00:23:06.544 "traddr": "10.0.0.2", 00:23:06.544 "adrfam": "ipv4", 00:23:06.544 "trsvcid": "4420", 00:23:06.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:06.544 "hdgst": false, 00:23:06.544 "ddgst": false 00:23:06.544 }, 00:23:06.544 "method": "bdev_nvme_attach_controller" 00:23:06.544 },{ 00:23:06.544 "params": { 00:23:06.544 "name": "Nvme2", 00:23:06.544 "trtype": "tcp", 00:23:06.544 "traddr": "10.0.0.2", 00:23:06.544 "adrfam": "ipv4", 00:23:06.544 "trsvcid": "4420", 00:23:06.544 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:06.544 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:06.544 "hdgst": false, 00:23:06.544 "ddgst": false 00:23:06.544 }, 00:23:06.544 "method": "bdev_nvme_attach_controller" 00:23:06.544 },{ 00:23:06.544 "params": { 00:23:06.544 "name": "Nvme3", 00:23:06.544 "trtype": "tcp", 00:23:06.544 "traddr": "10.0.0.2", 00:23:06.544 "adrfam": "ipv4", 00:23:06.544 "trsvcid": "4420", 00:23:06.544 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:06.544 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:06.544 "hdgst": false, 00:23:06.544 "ddgst": false 00:23:06.544 }, 00:23:06.544 "method": "bdev_nvme_attach_controller" 00:23:06.544 },{ 00:23:06.544 "params": { 00:23:06.544 "name": "Nvme4", 00:23:06.544 "trtype": "tcp", 00:23:06.544 "traddr": "10.0.0.2", 00:23:06.544 "adrfam": "ipv4", 00:23:06.544 "trsvcid": "4420", 00:23:06.544 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:06.544 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:06.544 "hdgst": false, 00:23:06.544 "ddgst": false 00:23:06.544 }, 00:23:06.544 "method": "bdev_nvme_attach_controller" 00:23:06.544 },{ 00:23:06.544 "params": { 
00:23:06.544 "name": "Nvme5", 00:23:06.544 "trtype": "tcp", 00:23:06.544 "traddr": "10.0.0.2", 00:23:06.544 "adrfam": "ipv4", 00:23:06.544 "trsvcid": "4420", 00:23:06.544 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:06.544 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:06.544 "hdgst": false, 00:23:06.544 "ddgst": false 00:23:06.544 }, 00:23:06.544 "method": "bdev_nvme_attach_controller" 00:23:06.544 },{ 00:23:06.544 "params": { 00:23:06.544 "name": "Nvme6", 00:23:06.544 "trtype": "tcp", 00:23:06.544 "traddr": "10.0.0.2", 00:23:06.544 "adrfam": "ipv4", 00:23:06.544 "trsvcid": "4420", 00:23:06.544 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:06.544 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:06.544 "hdgst": false, 00:23:06.544 "ddgst": false 00:23:06.544 }, 00:23:06.544 "method": "bdev_nvme_attach_controller" 00:23:06.544 },{ 00:23:06.544 "params": { 00:23:06.544 "name": "Nvme7", 00:23:06.544 "trtype": "tcp", 00:23:06.544 "traddr": "10.0.0.2", 00:23:06.544 "adrfam": "ipv4", 00:23:06.544 "trsvcid": "4420", 00:23:06.544 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:06.544 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:06.544 "hdgst": false, 00:23:06.544 "ddgst": false 00:23:06.544 }, 00:23:06.544 "method": "bdev_nvme_attach_controller" 00:23:06.544 },{ 00:23:06.544 "params": { 00:23:06.544 "name": "Nvme8", 00:23:06.544 "trtype": "tcp", 00:23:06.544 "traddr": "10.0.0.2", 00:23:06.544 "adrfam": "ipv4", 00:23:06.544 "trsvcid": "4420", 00:23:06.544 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:06.544 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:06.544 "hdgst": false, 00:23:06.544 "ddgst": false 00:23:06.544 }, 00:23:06.544 "method": "bdev_nvme_attach_controller" 00:23:06.544 },{ 00:23:06.544 "params": { 00:23:06.544 "name": "Nvme9", 00:23:06.544 "trtype": "tcp", 00:23:06.544 "traddr": "10.0.0.2", 00:23:06.544 "adrfam": "ipv4", 00:23:06.544 "trsvcid": "4420", 00:23:06.544 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:06.544 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:06.544 "hdgst": false, 00:23:06.544 "ddgst": false 00:23:06.544 }, 00:23:06.544 "method": "bdev_nvme_attach_controller" 00:23:06.544 },{ 00:23:06.544 "params": { 00:23:06.544 "name": "Nvme10", 00:23:06.544 "trtype": "tcp", 00:23:06.544 "traddr": "10.0.0.2", 00:23:06.544 "adrfam": "ipv4", 00:23:06.544 "trsvcid": "4420", 00:23:06.544 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:06.544 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:06.544 "hdgst": false, 00:23:06.544 "ddgst": false 00:23:06.544 }, 00:23:06.544 "method": "bdev_nvme_attach_controller" 00:23:06.544 }' 00:23:06.544 [2024-11-19 18:22:07.881300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.544 [2024-11-19 18:22:07.917182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.936 Running I/O for 1 seconds... 00:23:09.141 1871.00 IOPS, 116.94 MiB/s 00:23:09.141 Latency(us) 00:23:09.141 [2024-11-19T17:22:10.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.141 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.141 Verification LBA range: start 0x0 length 0x400 00:23:09.141 Nvme1n1 : 1.13 226.07 14.13 0.00 0.00 280309.76 15619.41 255153.49 00:23:09.141 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.141 Verification LBA range: start 0x0 length 0x400 00:23:09.141 Nvme2n1 : 1.17 227.13 14.20 0.00 0.00 264341.93 19442.35 239424.85 00:23:09.141 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.141 Verification LBA range: start 0x0 length 0x400 00:23:09.141 Nvme3n1 : 1.12 229.17 14.32 0.00 0.00 267060.48 14636.37 253405.87 00:23:09.141 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.141 Verification LBA range: start 0x0 length 0x400 00:23:09.141 Nvme4n1 : 1.17 272.59 17.04 0.00 0.00 220630.02 17803.95 249910.61 00:23:09.141 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:09.141 Verification LBA range: start 0x0 length 0x400 00:23:09.141 Nvme5n1 : 1.14 225.12 14.07 0.00 0.00 262463.57 15947.09 251658.24 00:23:09.141 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.141 Verification LBA range: start 0x0 length 0x400 00:23:09.141 Nvme6n1 : 1.14 223.90 13.99 0.00 0.00 259405.87 19551.57 269134.51 00:23:09.141 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.141 Verification LBA range: start 0x0 length 0x400 00:23:09.141 Nvme7n1 : 1.13 227.40 14.21 0.00 0.00 250211.84 20097.71 249910.61 00:23:09.141 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.141 Verification LBA range: start 0x0 length 0x400 00:23:09.141 Nvme8n1 : 1.18 271.22 16.95 0.00 0.00 206958.59 12397.23 274377.39 00:23:09.141 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.141 Verification LBA range: start 0x0 length 0x400 00:23:09.141 Nvme9n1 : 1.18 225.64 14.10 0.00 0.00 242981.19 2826.24 253405.87 00:23:09.141 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:09.141 Verification LBA range: start 0x0 length 0x400 00:23:09.141 Nvme10n1 : 1.19 268.67 16.79 0.00 0.00 201931.78 9011.20 269134.51 00:23:09.141 [2024-11-19T17:22:10.612Z] =================================================================================================================== 00:23:09.141 [2024-11-19T17:22:10.612Z] Total : 2396.92 149.81 0.00 0.00 243200.91 2826.24 274377.39 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
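The MiB/s column in the bdevperf summary above follows directly from the 64 KiB IO size (65536 bytes): MiB/s = IOPS * 65536 / 1048576 = IOPS / 16. A quick arithmetic check against two figures from the table:

```shell
# With 64 KiB IOs, MiB/s = IOPS / 16. Figures taken from the summary above:
# the first-second sample (1871.00 IOPS -> 116.94 MiB/s) and the Total row
# (2396.92 IOPS -> 149.81 MiB/s).
sample=$(awk 'BEGIN { printf "%.2f", 1871.00 / 16 }')
total=$(awk 'BEGIN { printf "%.2f", 2396.92 / 16 }')
echo "$sample $total"
```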
00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:09.141 rmmod nvme_tcp 00:23:09.141 rmmod nvme_fabrics 00:23:09.141 rmmod nvme_keyring 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2051946 ']' 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2051946 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2051946 ']' 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 2051946 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:23:09.141 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.401 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2051946 00:23:09.401 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:09.401 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:09.401 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2051946' 00:23:09.401 killing process with pid 2051946 00:23:09.401 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2051946 00:23:09.401 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2051946 00:23:09.662 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:09.662 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:09.662 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:09.662 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:09.662 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:09.662 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:09.662 18:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:09.662 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:09.662 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:09.662 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.662 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.662 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.577 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:11.577 00:23:11.577 real 0m16.923s 00:23:11.577 user 0m34.266s 00:23:11.577 sys 0m6.923s 00:23:11.577 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:11.577 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:11.577 ************************************ 00:23:11.577 END TEST nvmf_shutdown_tc1 00:23:11.577 ************************************ 00:23:11.577 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:11.577 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:11.577 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.577 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:11.839 ************************************ 00:23:11.839 
START TEST nvmf_shutdown_tc2 00:23:11.839 ************************************ 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:11.839 18:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:11.839 18:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:11.839 18:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:11.839 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:11.839 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:11.839 18:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.839 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.840 18:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:11.840 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:11.840 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:11.840 18:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:11.840 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:12.101 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:12.101 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:12.101 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:12.101 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:12.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:12.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:23:12.101 00:23:12.101 --- 10.0.0.2 ping statistics --- 00:23:12.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.101 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:23:12.101 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:12.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:12.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:23:12.101 00:23:12.101 --- 10.0.0.1 ping statistics --- 00:23:12.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.101 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:23:12.101 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.101 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:12.101 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:12.101 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.101 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:12.101 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:12.102 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.102 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:12.102 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:12.102 18:22:13 
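The network-init phase above ends with `ipts` expanding (via `nvmf/common.sh@790`) into a plain `iptables` call that appends an `-m comment` tag carrying the original arguments, so cleanup can later delete exactly the rules this test added. A minimal sketch of that wrapper pattern — the function body is an assumption reconstructed from the trace, and it echoes the command instead of executing it so it runs without root:

```shell
#!/usr/bin/env bash
# Sketch of the ipts wrapper seen in the trace: tag every iptables rule
# with a comment so test teardown can find and remove it later.
# The SPDK_NVMF tag mirrors the log; echoing instead of running iptables
# is an illustrative simplification (the real helper executes the rule).
ipts() {
    # "$*" joins the original arguments into the comment verbatim.
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Same rule as nvmf/common.sh@287 in the trace above.
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The comment makes teardown idempotent: a cleanup pass can list rules, match on the `SPDK_NVMF:` prefix, and replay each as a delete without tracking state between test stages.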
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:12.102 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:12.102 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.102 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.102 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2054105 00:23:12.102 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2054105 00:23:12.102 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:12.102 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2054105 ']' 00:23:12.102 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.102 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.102 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:12.102 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.102 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.102 [2024-11-19 18:22:13.496076] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:23:12.102 [2024-11-19 18:22:13.496141] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.363 [2024-11-19 18:22:13.591834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:12.363 [2024-11-19 18:22:13.625969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.363 [2024-11-19 18:22:13.625996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.363 [2024-11-19 18:22:13.626002] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.363 [2024-11-19 18:22:13.626007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.363 [2024-11-19 18:22:13.626011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:12.363 [2024-11-19 18:22:13.627588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.363 [2024-11-19 18:22:13.627742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:12.363 [2024-11-19 18:22:13.627893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.363 [2024-11-19 18:22:13.627895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.935 [2024-11-19 18:22:14.338492] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.935 18:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:12.935 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.196 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.196 Malloc1 00:23:13.196 [2024-11-19 18:22:14.448080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.196 Malloc2 00:23:13.196 Malloc3 00:23:13.196 Malloc4 00:23:13.196 Malloc5 00:23:13.196 Malloc6 00:23:13.196 Malloc7 00:23:13.458 Malloc8 00:23:13.458 Malloc9 
00:23:13.458 Malloc10 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2054485 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2054485 /var/tmp/bdevperf.sock 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2054485 ']' 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:13.458 { 00:23:13.458 "params": { 00:23:13.458 "name": "Nvme$subsystem", 00:23:13.458 "trtype": "$TEST_TRANSPORT", 00:23:13.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.458 "adrfam": "ipv4", 00:23:13.458 "trsvcid": "$NVMF_PORT", 00:23:13.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.458 "hdgst": ${hdgst:-false}, 00:23:13.458 "ddgst": ${ddgst:-false} 00:23:13.458 }, 00:23:13.458 "method": "bdev_nvme_attach_controller" 00:23:13.458 } 00:23:13.458 EOF 00:23:13.458 )") 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:13.458 { 00:23:13.458 "params": { 00:23:13.458 "name": "Nvme$subsystem", 00:23:13.458 "trtype": "$TEST_TRANSPORT", 00:23:13.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.458 "adrfam": "ipv4", 00:23:13.458 "trsvcid": "$NVMF_PORT", 00:23:13.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.458 "hdgst": ${hdgst:-false}, 00:23:13.458 "ddgst": ${ddgst:-false} 00:23:13.458 }, 00:23:13.458 "method": "bdev_nvme_attach_controller" 00:23:13.458 } 00:23:13.458 EOF 00:23:13.458 )") 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:13.458 { 00:23:13.458 "params": { 00:23:13.458 "name": "Nvme$subsystem", 00:23:13.458 "trtype": "$TEST_TRANSPORT", 00:23:13.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.458 "adrfam": "ipv4", 00:23:13.458 "trsvcid": "$NVMF_PORT", 00:23:13.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.458 "hdgst": ${hdgst:-false}, 00:23:13.458 "ddgst": ${ddgst:-false} 00:23:13.458 }, 00:23:13.458 "method": "bdev_nvme_attach_controller" 00:23:13.458 } 00:23:13.458 EOF 00:23:13.458 )") 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:23:13.458 { 00:23:13.458 "params": { 00:23:13.458 "name": "Nvme$subsystem", 00:23:13.458 "trtype": "$TEST_TRANSPORT", 00:23:13.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.458 "adrfam": "ipv4", 00:23:13.458 "trsvcid": "$NVMF_PORT", 00:23:13.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.458 "hdgst": ${hdgst:-false}, 00:23:13.458 "ddgst": ${ddgst:-false} 00:23:13.458 }, 00:23:13.458 "method": "bdev_nvme_attach_controller" 00:23:13.458 } 00:23:13.458 EOF 00:23:13.458 )") 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:13.458 { 00:23:13.458 "params": { 00:23:13.458 "name": "Nvme$subsystem", 00:23:13.458 "trtype": "$TEST_TRANSPORT", 00:23:13.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.458 "adrfam": "ipv4", 00:23:13.458 "trsvcid": "$NVMF_PORT", 00:23:13.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.458 "hdgst": ${hdgst:-false}, 00:23:13.458 "ddgst": ${ddgst:-false} 00:23:13.458 }, 00:23:13.458 "method": "bdev_nvme_attach_controller" 00:23:13.458 } 00:23:13.458 EOF 00:23:13.458 )") 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:13.458 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:13.459 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:13.459 { 00:23:13.459 "params": { 00:23:13.459 "name": "Nvme$subsystem", 00:23:13.459 "trtype": "$TEST_TRANSPORT", 
00:23:13.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.459 "adrfam": "ipv4", 00:23:13.459 "trsvcid": "$NVMF_PORT", 00:23:13.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.459 "hdgst": ${hdgst:-false}, 00:23:13.459 "ddgst": ${ddgst:-false} 00:23:13.459 }, 00:23:13.459 "method": "bdev_nvme_attach_controller" 00:23:13.459 } 00:23:13.459 EOF 00:23:13.459 )") 00:23:13.459 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:13.459 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:13.459 [2024-11-19 18:22:14.896822] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:23:13.459 [2024-11-19 18:22:14.896876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054485 ] 00:23:13.459 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:13.459 { 00:23:13.459 "params": { 00:23:13.459 "name": "Nvme$subsystem", 00:23:13.459 "trtype": "$TEST_TRANSPORT", 00:23:13.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.459 "adrfam": "ipv4", 00:23:13.459 "trsvcid": "$NVMF_PORT", 00:23:13.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.459 "hdgst": ${hdgst:-false}, 00:23:13.459 "ddgst": ${ddgst:-false} 00:23:13.459 }, 00:23:13.459 "method": "bdev_nvme_attach_controller" 00:23:13.459 } 00:23:13.459 EOF 00:23:13.459 )") 00:23:13.459 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:13.459 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:13.459 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:13.459 { 00:23:13.459 "params": { 00:23:13.459 "name": "Nvme$subsystem", 00:23:13.459 "trtype": "$TEST_TRANSPORT", 00:23:13.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.459 "adrfam": "ipv4", 00:23:13.459 "trsvcid": "$NVMF_PORT", 00:23:13.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.459 "hdgst": ${hdgst:-false}, 00:23:13.459 "ddgst": ${ddgst:-false} 00:23:13.459 }, 00:23:13.459 "method": "bdev_nvme_attach_controller" 00:23:13.459 } 00:23:13.459 EOF 00:23:13.459 )") 00:23:13.459 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:13.459 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:13.459 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:13.459 { 00:23:13.459 "params": { 00:23:13.459 "name": "Nvme$subsystem", 00:23:13.459 "trtype": "$TEST_TRANSPORT", 00:23:13.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.459 "adrfam": "ipv4", 00:23:13.459 "trsvcid": "$NVMF_PORT", 00:23:13.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.459 "hdgst": ${hdgst:-false}, 00:23:13.459 "ddgst": ${ddgst:-false} 00:23:13.459 }, 00:23:13.459 "method": "bdev_nvme_attach_controller" 00:23:13.459 } 00:23:13.459 EOF 00:23:13.459 )") 00:23:13.459 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:13.459 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:13.459 18:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:13.459 { 00:23:13.459 "params": { 00:23:13.459 "name": "Nvme$subsystem", 00:23:13.459 "trtype": "$TEST_TRANSPORT", 00:23:13.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.459 "adrfam": "ipv4", 00:23:13.459 "trsvcid": "$NVMF_PORT", 00:23:13.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.459 "hdgst": ${hdgst:-false}, 00:23:13.459 "ddgst": ${ddgst:-false} 00:23:13.459 }, 00:23:13.459 "method": "bdev_nvme_attach_controller" 00:23:13.459 } 00:23:13.459 EOF 00:23:13.459 )") 00:23:13.459 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:13.720 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:23:13.720 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:13.720 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:13.720 "params": { 00:23:13.720 "name": "Nvme1", 00:23:13.720 "trtype": "tcp", 00:23:13.720 "traddr": "10.0.0.2", 00:23:13.720 "adrfam": "ipv4", 00:23:13.720 "trsvcid": "4420", 00:23:13.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.720 "hdgst": false, 00:23:13.720 "ddgst": false 00:23:13.720 }, 00:23:13.720 "method": "bdev_nvme_attach_controller" 00:23:13.720 },{ 00:23:13.720 "params": { 00:23:13.720 "name": "Nvme2", 00:23:13.720 "trtype": "tcp", 00:23:13.720 "traddr": "10.0.0.2", 00:23:13.720 "adrfam": "ipv4", 00:23:13.720 "trsvcid": "4420", 00:23:13.720 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:13.720 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:13.720 "hdgst": false, 00:23:13.720 "ddgst": false 00:23:13.720 }, 00:23:13.720 "method": "bdev_nvme_attach_controller" 00:23:13.720 },{ 
00:23:13.720 "params": { 00:23:13.720 "name": "Nvme3", 00:23:13.720 "trtype": "tcp", 00:23:13.720 "traddr": "10.0.0.2", 00:23:13.720 "adrfam": "ipv4", 00:23:13.720 "trsvcid": "4420", 00:23:13.720 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:13.720 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:13.720 "hdgst": false, 00:23:13.720 "ddgst": false 00:23:13.720 }, 00:23:13.720 "method": "bdev_nvme_attach_controller" 00:23:13.720 },{ 00:23:13.720 "params": { 00:23:13.720 "name": "Nvme4", 00:23:13.720 "trtype": "tcp", 00:23:13.720 "traddr": "10.0.0.2", 00:23:13.720 "adrfam": "ipv4", 00:23:13.720 "trsvcid": "4420", 00:23:13.720 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:13.720 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:13.720 "hdgst": false, 00:23:13.720 "ddgst": false 00:23:13.720 }, 00:23:13.720 "method": "bdev_nvme_attach_controller" 00:23:13.720 },{ 00:23:13.720 "params": { 00:23:13.720 "name": "Nvme5", 00:23:13.720 "trtype": "tcp", 00:23:13.720 "traddr": "10.0.0.2", 00:23:13.720 "adrfam": "ipv4", 00:23:13.720 "trsvcid": "4420", 00:23:13.720 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:13.720 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:13.720 "hdgst": false, 00:23:13.720 "ddgst": false 00:23:13.720 }, 00:23:13.720 "method": "bdev_nvme_attach_controller" 00:23:13.720 },{ 00:23:13.720 "params": { 00:23:13.720 "name": "Nvme6", 00:23:13.720 "trtype": "tcp", 00:23:13.720 "traddr": "10.0.0.2", 00:23:13.720 "adrfam": "ipv4", 00:23:13.720 "trsvcid": "4420", 00:23:13.720 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:13.720 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:13.720 "hdgst": false, 00:23:13.720 "ddgst": false 00:23:13.720 }, 00:23:13.720 "method": "bdev_nvme_attach_controller" 00:23:13.720 },{ 00:23:13.720 "params": { 00:23:13.720 "name": "Nvme7", 00:23:13.720 "trtype": "tcp", 00:23:13.720 "traddr": "10.0.0.2", 00:23:13.720 "adrfam": "ipv4", 00:23:13.720 "trsvcid": "4420", 00:23:13.720 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:13.720 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:23:13.720 "hdgst": false, 00:23:13.720 "ddgst": false 00:23:13.720 }, 00:23:13.720 "method": "bdev_nvme_attach_controller" 00:23:13.720 },{ 00:23:13.720 "params": { 00:23:13.720 "name": "Nvme8", 00:23:13.720 "trtype": "tcp", 00:23:13.720 "traddr": "10.0.0.2", 00:23:13.720 "adrfam": "ipv4", 00:23:13.720 "trsvcid": "4420", 00:23:13.720 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:13.720 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:13.720 "hdgst": false, 00:23:13.720 "ddgst": false 00:23:13.720 }, 00:23:13.720 "method": "bdev_nvme_attach_controller" 00:23:13.720 },{ 00:23:13.720 "params": { 00:23:13.720 "name": "Nvme9", 00:23:13.720 "trtype": "tcp", 00:23:13.720 "traddr": "10.0.0.2", 00:23:13.720 "adrfam": "ipv4", 00:23:13.720 "trsvcid": "4420", 00:23:13.720 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:13.720 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:13.720 "hdgst": false, 00:23:13.720 "ddgst": false 00:23:13.720 }, 00:23:13.720 "method": "bdev_nvme_attach_controller" 00:23:13.720 },{ 00:23:13.720 "params": { 00:23:13.720 "name": "Nvme10", 00:23:13.720 "trtype": "tcp", 00:23:13.720 "traddr": "10.0.0.2", 00:23:13.720 "adrfam": "ipv4", 00:23:13.720 "trsvcid": "4420", 00:23:13.720 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:13.720 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:13.720 "hdgst": false, 00:23:13.720 "ddgst": false 00:23:13.720 }, 00:23:13.720 "method": "bdev_nvme_attach_controller" 00:23:13.720 }' 00:23:13.720 [2024-11-19 18:22:14.986908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.720 [2024-11-19 18:22:15.023057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.632 Running I/O for 10 seconds... 
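The long run of `config+=("$(cat <<-EOF …)")` commands above is `gen_nvmf_target_json` building one JSON controller stanza per subsystem, then joining them with `IFS=,` and `printf` into the `--json` config that bdevperf reads from `/dev/fd/63`. A condensed sketch of that accumulate-and-join pattern, with three subsystems instead of ten and placeholder values standing in for the environment the real script exports:

```shell
#!/usr/bin/env bash
# Condensed sketch of the gen_nvmf_target_json pattern from the trace:
# one heredoc-generated JSON stanza per subsystem, accumulated into an
# array and comma-joined via IFS. Values are placeholders from the log.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
  config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP", "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" },
  "method": "bdev_nvme_attach_controller" }
EOF
)")
done

# Join the stanzas with commas, mirroring the separate IFS=, assignment
# followed by printf '%s\n' "${config[*]}" at nvmf/common.sh@585-586.
IFS=,
printf '%s\n' "${config[*]}"
```

Note that the trace sets `IFS=,` as its own command before the `printf`; a one-line `IFS=, printf '%s\n' "${config[*]}"` would not work, because `"${config[*]}"` is expanded before the temporary assignment takes effect.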
00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:15.632 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:15.892 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:15.892 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:15.892 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:15.892 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:15.892 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.892 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:15.892 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.892 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:15.892 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:15.892 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2054485 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2054485 
']' 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2054485 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2054485 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2054485' 00:23:16.153 killing process with pid 2054485 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2054485 00:23:16.153 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2054485 00:23:16.413 Received shutdown signal, test time was about 0.979488 seconds 00:23:16.413 00:23:16.413 Latency(us) 00:23:16.413 [2024-11-19T17:22:17.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.413 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:16.413 Verification LBA range: start 0x0 length 0x400 00:23:16.413 Nvme1n1 : 0.98 261.60 16.35 0.00 0.00 241840.64 17367.04 237677.23 00:23:16.413 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:16.413 Verification LBA range: start 0x0 length 0x400 00:23:16.414 Nvme2n1 : 0.94 204.91 12.81 0.00 0.00 302163.34 20643.84 248162.99 
00:23:16.414 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:16.414 Verification LBA range: start 0x0 length 0x400 00:23:16.414 Nvme3n1 : 0.97 264.58 16.54 0.00 0.00 229398.83 17803.95 272629.76 00:23:16.414 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:16.414 Verification LBA range: start 0x0 length 0x400 00:23:16.414 Nvme4n1 : 0.96 265.76 16.61 0.00 0.00 223469.44 21845.33 248162.99 00:23:16.414 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:16.414 Verification LBA range: start 0x0 length 0x400 00:23:16.414 Nvme5n1 : 0.97 268.20 16.76 0.00 0.00 216090.15 4396.37 248162.99 00:23:16.414 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:16.414 Verification LBA range: start 0x0 length 0x400 00:23:16.414 Nvme6n1 : 0.97 262.84 16.43 0.00 0.00 215672.11 19223.89 241172.48 00:23:16.414 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:16.414 Verification LBA range: start 0x0 length 0x400 00:23:16.414 Nvme7n1 : 0.95 202.76 12.67 0.00 0.00 273039.64 19988.48 248162.99 00:23:16.414 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:16.414 Verification LBA range: start 0x0 length 0x400 00:23:16.414 Nvme8n1 : 0.97 263.19 16.45 0.00 0.00 206131.20 23811.41 244667.73 00:23:16.414 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:16.414 Verification LBA range: start 0x0 length 0x400 00:23:16.414 Nvme9n1 : 0.95 201.17 12.57 0.00 0.00 263071.00 15728.64 253405.87 00:23:16.414 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:16.414 Verification LBA range: start 0x0 length 0x400 00:23:16.414 Nvme10n1 : 0.96 200.09 12.51 0.00 0.00 258424.60 16056.32 270882.13 00:23:16.414 [2024-11-19T17:22:17.885Z] =================================================================================================================== 00:23:16.414 
[2024-11-19T17:22:17.885Z] Total : 2395.10 149.69 0.00 0.00 239417.98 4396.37 272629.76 00:23:16.414 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:23:17.354 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2054105 00:23:17.354 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:23:17.354 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:17.354 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:17.354 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:17.354 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:17.354 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:17.354 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:17.354 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:17.354 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:17.354 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:17.354 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:17.354 rmmod nvme_tcp 00:23:17.354 rmmod nvme_fabrics 00:23:17.615 rmmod nvme_keyring 00:23:17.615 18:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:17.615 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:17.615 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:17.615 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2054105 ']' 00:23:17.615 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2054105 00:23:17.615 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2054105 ']' 00:23:17.615 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2054105 00:23:17.615 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:17.615 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.615 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2054105 00:23:17.615 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:17.615 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:17.615 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2054105' 00:23:17.615 killing process with pid 2054105 00:23:17.615 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2054105 00:23:17.615 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@978 -- # wait 2054105 00:23:17.874 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:17.874 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:17.874 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:17.874 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:17.874 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:17.874 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:17.874 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:17.874 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:17.874 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:17.875 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.875 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.875 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.786 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:19.786 00:23:19.786 real 0m8.162s 00:23:19.786 user 0m25.174s 00:23:19.786 sys 0m1.309s 00:23:19.786 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:23:19.786 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.786 ************************************ 00:23:19.786 END TEST nvmf_shutdown_tc2 00:23:19.786 ************************************ 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:20.048 ************************************ 00:23:20.048 START TEST nvmf_shutdown_tc3 00:23:20.048 ************************************ 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # local -ga net_devs 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:20.048 18:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:20.048 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.048 18:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:20.048 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:20.048 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:20.049 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:20.049 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.049 18:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:20.049 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.310 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.310 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.310 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:20.310 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:20.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:23:20.310 00:23:20.310 --- 10.0.0.2 ping statistics --- 00:23:20.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.310 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:23:20.310 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:20.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:23:20.310 00:23:20.310 --- 10.0.0.1 ping statistics --- 00:23:20.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.310 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:23:20.310 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.310 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:20.310 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:20.310 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.310 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:20.310 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:20.311 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.311 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:20.311 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:20.311 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:20.311 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:20.311 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.311 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.311 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2055814 00:23:20.311 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2055814 00:23:20.311 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:20.311 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2055814 ']' 00:23:20.311 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.311 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.311 18:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.311 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.311 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.311 [2024-11-19 18:22:21.739022] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:23:20.311 [2024-11-19 18:22:21.739075] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.570 [2024-11-19 18:22:21.822486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:20.570 [2024-11-19 18:22:21.853433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.570 [2024-11-19 18:22:21.853458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.570 [2024-11-19 18:22:21.853465] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.570 [2024-11-19 18:22:21.853470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.570 [2024-11-19 18:22:21.853474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:20.570 [2024-11-19 18:22:21.854902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.570 [2024-11-19 18:22:21.855062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.570 [2024-11-19 18:22:21.855434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:20.570 [2024-11-19 18:22:21.855533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.140 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.140 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:21.140 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:21.140 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.140 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.140 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.140 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:21.140 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.140 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.140 [2024-11-19 18:22:22.587486] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.140 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.140 18:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:21.140 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:21.140 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.140 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.140 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:21.140 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.140 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:21.401 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.401 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:21.401 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.401 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:21.402 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.402 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:21.402 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.402 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:23:21.402 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.402 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:21.402 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.402 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:21.402 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.402 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:21.402 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.402 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:21.402 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.402 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:21.402 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:21.402 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.402 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.402 Malloc1 00:23:21.402 [2024-11-19 18:22:22.697914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.402 Malloc2 00:23:21.402 Malloc3 00:23:21.402 Malloc4 00:23:21.402 Malloc5 00:23:21.402 Malloc6 00:23:21.663 Malloc7 00:23:21.663 Malloc8 00:23:21.663 Malloc9 
00:23:21.663 Malloc10 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2056033 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2056033 /var/tmp/bdevperf.sock 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2056033 ']' 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:21.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.663 { 00:23:21.663 "params": { 00:23:21.663 "name": "Nvme$subsystem", 00:23:21.663 "trtype": "$TEST_TRANSPORT", 00:23:21.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.663 "adrfam": "ipv4", 00:23:21.663 "trsvcid": "$NVMF_PORT", 00:23:21.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.663 "hdgst": ${hdgst:-false}, 00:23:21.663 "ddgst": ${ddgst:-false} 00:23:21.663 }, 00:23:21.663 "method": "bdev_nvme_attach_controller" 00:23:21.663 } 00:23:21.663 EOF 00:23:21.663 )") 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.663 { 00:23:21.663 "params": { 00:23:21.663 "name": "Nvme$subsystem", 00:23:21.663 "trtype": "$TEST_TRANSPORT", 00:23:21.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.663 "adrfam": "ipv4", 00:23:21.663 "trsvcid": "$NVMF_PORT", 00:23:21.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.663 "hdgst": ${hdgst:-false}, 00:23:21.663 "ddgst": ${ddgst:-false} 00:23:21.663 }, 00:23:21.663 "method": "bdev_nvme_attach_controller" 00:23:21.663 } 00:23:21.663 EOF 00:23:21.663 )") 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.663 { 00:23:21.663 "params": { 00:23:21.663 "name": "Nvme$subsystem", 00:23:21.663 "trtype": "$TEST_TRANSPORT", 00:23:21.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.663 "adrfam": "ipv4", 00:23:21.663 "trsvcid": "$NVMF_PORT", 00:23:21.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.663 "hdgst": ${hdgst:-false}, 00:23:21.663 "ddgst": ${ddgst:-false} 00:23:21.663 }, 00:23:21.663 "method": "bdev_nvme_attach_controller" 00:23:21.663 } 00:23:21.663 EOF 00:23:21.663 )") 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:23:21.663 { 00:23:21.663 "params": { 00:23:21.663 "name": "Nvme$subsystem", 00:23:21.663 "trtype": "$TEST_TRANSPORT", 00:23:21.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.663 "adrfam": "ipv4", 00:23:21.663 "trsvcid": "$NVMF_PORT", 00:23:21.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.663 "hdgst": ${hdgst:-false}, 00:23:21.663 "ddgst": ${ddgst:-false} 00:23:21.663 }, 00:23:21.663 "method": "bdev_nvme_attach_controller" 00:23:21.663 } 00:23:21.663 EOF 00:23:21.663 )") 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.663 { 00:23:21.663 "params": { 00:23:21.663 "name": "Nvme$subsystem", 00:23:21.663 "trtype": "$TEST_TRANSPORT", 00:23:21.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.663 "adrfam": "ipv4", 00:23:21.663 "trsvcid": "$NVMF_PORT", 00:23:21.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.663 "hdgst": ${hdgst:-false}, 00:23:21.663 "ddgst": ${ddgst:-false} 00:23:21.663 }, 00:23:21.663 "method": "bdev_nvme_attach_controller" 00:23:21.663 } 00:23:21.663 EOF 00:23:21.663 )") 00:23:21.663 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:21.924 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.924 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.924 { 00:23:21.924 "params": { 00:23:21.924 "name": "Nvme$subsystem", 00:23:21.924 "trtype": "$TEST_TRANSPORT", 
00:23:21.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.924 "adrfam": "ipv4", 00:23:21.924 "trsvcid": "$NVMF_PORT", 00:23:21.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.924 "hdgst": ${hdgst:-false}, 00:23:21.924 "ddgst": ${ddgst:-false} 00:23:21.924 }, 00:23:21.924 "method": "bdev_nvme_attach_controller" 00:23:21.924 } 00:23:21.924 EOF 00:23:21.924 )") 00:23:21.924 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:21.924 [2024-11-19 18:22:23.140445] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:23:21.924 [2024-11-19 18:22:23.140497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2056033 ] 00:23:21.924 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.924 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.924 { 00:23:21.924 "params": { 00:23:21.924 "name": "Nvme$subsystem", 00:23:21.924 "trtype": "$TEST_TRANSPORT", 00:23:21.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.924 "adrfam": "ipv4", 00:23:21.924 "trsvcid": "$NVMF_PORT", 00:23:21.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.924 "hdgst": ${hdgst:-false}, 00:23:21.924 "ddgst": ${ddgst:-false} 00:23:21.924 }, 00:23:21.924 "method": "bdev_nvme_attach_controller" 00:23:21.924 } 00:23:21.924 EOF 00:23:21.925 )") 00:23:21.925 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:21.925 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.925 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.925 { 00:23:21.925 "params": { 00:23:21.925 "name": "Nvme$subsystem", 00:23:21.925 "trtype": "$TEST_TRANSPORT", 00:23:21.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.925 "adrfam": "ipv4", 00:23:21.925 "trsvcid": "$NVMF_PORT", 00:23:21.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.925 "hdgst": ${hdgst:-false}, 00:23:21.925 "ddgst": ${ddgst:-false} 00:23:21.925 }, 00:23:21.925 "method": "bdev_nvme_attach_controller" 00:23:21.925 } 00:23:21.925 EOF 00:23:21.925 )") 00:23:21.925 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:21.925 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.925 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.925 { 00:23:21.925 "params": { 00:23:21.925 "name": "Nvme$subsystem", 00:23:21.925 "trtype": "$TEST_TRANSPORT", 00:23:21.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.925 "adrfam": "ipv4", 00:23:21.925 "trsvcid": "$NVMF_PORT", 00:23:21.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.925 "hdgst": ${hdgst:-false}, 00:23:21.925 "ddgst": ${ddgst:-false} 00:23:21.925 }, 00:23:21.925 "method": "bdev_nvme_attach_controller" 00:23:21.925 } 00:23:21.925 EOF 00:23:21.925 )") 00:23:21.925 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:21.925 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.925 18:22:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.925 { 00:23:21.925 "params": { 00:23:21.925 "name": "Nvme$subsystem", 00:23:21.925 "trtype": "$TEST_TRANSPORT", 00:23:21.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.925 "adrfam": "ipv4", 00:23:21.925 "trsvcid": "$NVMF_PORT", 00:23:21.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.925 "hdgst": ${hdgst:-false}, 00:23:21.925 "ddgst": ${ddgst:-false} 00:23:21.925 }, 00:23:21.925 "method": "bdev_nvme_attach_controller" 00:23:21.925 } 00:23:21.925 EOF 00:23:21.925 )") 00:23:21.925 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:21.925 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:21.925 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:21.925 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:21.925 "params": { 00:23:21.925 "name": "Nvme1", 00:23:21.925 "trtype": "tcp", 00:23:21.925 "traddr": "10.0.0.2", 00:23:21.925 "adrfam": "ipv4", 00:23:21.925 "trsvcid": "4420", 00:23:21.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:21.925 "hdgst": false, 00:23:21.925 "ddgst": false 00:23:21.925 }, 00:23:21.925 "method": "bdev_nvme_attach_controller" 00:23:21.925 },{ 00:23:21.925 "params": { 00:23:21.925 "name": "Nvme2", 00:23:21.925 "trtype": "tcp", 00:23:21.925 "traddr": "10.0.0.2", 00:23:21.925 "adrfam": "ipv4", 00:23:21.925 "trsvcid": "4420", 00:23:21.925 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:21.925 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:21.925 "hdgst": false, 00:23:21.925 "ddgst": false 00:23:21.925 }, 00:23:21.925 "method": "bdev_nvme_attach_controller" 00:23:21.925 },{ 
00:23:21.925 "params": { 00:23:21.925 "name": "Nvme3", 00:23:21.925 "trtype": "tcp", 00:23:21.925 "traddr": "10.0.0.2", 00:23:21.925 "adrfam": "ipv4", 00:23:21.925 "trsvcid": "4420", 00:23:21.925 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:21.925 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:21.925 "hdgst": false, 00:23:21.925 "ddgst": false 00:23:21.925 }, 00:23:21.925 "method": "bdev_nvme_attach_controller" 00:23:21.925 },{ 00:23:21.925 "params": { 00:23:21.925 "name": "Nvme4", 00:23:21.925 "trtype": "tcp", 00:23:21.925 "traddr": "10.0.0.2", 00:23:21.925 "adrfam": "ipv4", 00:23:21.925 "trsvcid": "4420", 00:23:21.925 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:21.925 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:21.925 "hdgst": false, 00:23:21.925 "ddgst": false 00:23:21.925 }, 00:23:21.925 "method": "bdev_nvme_attach_controller" 00:23:21.925 },{ 00:23:21.925 "params": { 00:23:21.925 "name": "Nvme5", 00:23:21.925 "trtype": "tcp", 00:23:21.925 "traddr": "10.0.0.2", 00:23:21.925 "adrfam": "ipv4", 00:23:21.925 "trsvcid": "4420", 00:23:21.925 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:21.925 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:21.925 "hdgst": false, 00:23:21.925 "ddgst": false 00:23:21.925 }, 00:23:21.925 "method": "bdev_nvme_attach_controller" 00:23:21.925 },{ 00:23:21.925 "params": { 00:23:21.925 "name": "Nvme6", 00:23:21.925 "trtype": "tcp", 00:23:21.925 "traddr": "10.0.0.2", 00:23:21.925 "adrfam": "ipv4", 00:23:21.925 "trsvcid": "4420", 00:23:21.925 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:21.925 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:21.925 "hdgst": false, 00:23:21.925 "ddgst": false 00:23:21.925 }, 00:23:21.925 "method": "bdev_nvme_attach_controller" 00:23:21.925 },{ 00:23:21.925 "params": { 00:23:21.925 "name": "Nvme7", 00:23:21.925 "trtype": "tcp", 00:23:21.925 "traddr": "10.0.0.2", 00:23:21.925 "adrfam": "ipv4", 00:23:21.925 "trsvcid": "4420", 00:23:21.925 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:21.925 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:23:21.925 "hdgst": false, 00:23:21.925 "ddgst": false 00:23:21.925 }, 00:23:21.925 "method": "bdev_nvme_attach_controller" 00:23:21.925 },{ 00:23:21.925 "params": { 00:23:21.925 "name": "Nvme8", 00:23:21.925 "trtype": "tcp", 00:23:21.925 "traddr": "10.0.0.2", 00:23:21.925 "adrfam": "ipv4", 00:23:21.925 "trsvcid": "4420", 00:23:21.925 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:21.925 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:21.925 "hdgst": false, 00:23:21.925 "ddgst": false 00:23:21.925 }, 00:23:21.925 "method": "bdev_nvme_attach_controller" 00:23:21.925 },{ 00:23:21.925 "params": { 00:23:21.925 "name": "Nvme9", 00:23:21.925 "trtype": "tcp", 00:23:21.925 "traddr": "10.0.0.2", 00:23:21.925 "adrfam": "ipv4", 00:23:21.925 "trsvcid": "4420", 00:23:21.925 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:21.925 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:21.925 "hdgst": false, 00:23:21.925 "ddgst": false 00:23:21.925 }, 00:23:21.925 "method": "bdev_nvme_attach_controller" 00:23:21.925 },{ 00:23:21.925 "params": { 00:23:21.925 "name": "Nvme10", 00:23:21.925 "trtype": "tcp", 00:23:21.925 "traddr": "10.0.0.2", 00:23:21.925 "adrfam": "ipv4", 00:23:21.925 "trsvcid": "4420", 00:23:21.925 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:21.925 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:21.925 "hdgst": false, 00:23:21.925 "ddgst": false 00:23:21.925 }, 00:23:21.925 "method": "bdev_nvme_attach_controller" 00:23:21.925 }' 00:23:21.925 [2024-11-19 18:22:23.229388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.925 [2024-11-19 18:22:23.265588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.309 Running I/O for 10 seconds... 
00:23:23.309 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.309 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:23.309 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:23.309 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.309 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:23.569 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.569 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:23.569 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:23.569 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:23.569 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:23.569 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:23.569 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:23.569 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:23.569 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:23.569 18:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:23.569 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:23.569 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.569 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:23.569 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.569 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:23.569 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:23.569 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:23.829 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:23.829 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:23.829 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:23.829 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:23.829 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.829 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:23.829 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:23:23.829 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:23.829 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:23.829 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:24.089 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:24.089 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:24.089 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:24.089 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:24.089 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.089 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:24.366 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.366 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:24.366 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:24.366 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:24.366 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:24.366 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:24.366 18:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2055814 00:23:24.366 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2055814 ']' 00:23:24.366 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2055814 00:23:24.366 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:24.366 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.366 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2055814 00:23:24.366 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:24.366 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:24.366 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2055814' 00:23:24.366 killing process with pid 2055814 00:23:24.366 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2055814 00:23:24.366 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2055814 00:23:24.366 [2024-11-19 18:22:25.647149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.366 [2024-11-19 18:22:25.647405]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.647515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c110 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 
is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 
00:23:24.367 [2024-11-19 18:22:25.649606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649663] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.367 [2024-11-19 18:22:25.649724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 
is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.649831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 
00:23:24.368 [2024-11-19 18:22:25.649835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3c600 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651257] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 
is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 
00:23:24.368 [2024-11-19 18:22:25.651438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.368 [2024-11-19 18:22:25.651490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.369 [2024-11-19 18:22:25.651495] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.369 [2024-11-19 18:22:25.651499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.369 [2024-11-19 18:22:25.651504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cad0 is same with the state(6) to be set 00:23:24.369 [2024-11-19 18:22:25.652355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cfc0 is same with the state(6) to be set 00:23:24.369 [2024-11-19 18:22:25.652377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cfc0 is same with the state(6) to be set 00:23:24.369 [2024-11-19 18:22:25.652383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cfc0 is same with the state(6) to be set 00:23:24.369 [2024-11-19 18:22:25.652388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cfc0 is same with the state(6) to be set 00:23:24.369 [2024-11-19 18:22:25.652393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cfc0 is same with the state(6) to be set 00:23:24.369 [2024-11-19 18:22:25.652398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cfc0 is same with the state(6) to be set 00:23:24.369 [2024-11-19 18:22:25.652403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cfc0 is same with the state(6) to be set 00:23:24.369 [2024-11-19 18:22:25.652407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cfc0 is same with the state(6) to be set 00:23:24.369 [2024-11-19 18:22:25.652412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cfc0 is same with the state(6) to be set 00:23:24.369 [2024-11-19 18:22:25.652417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xb3cfc0 is same with the state(6) to be set 00:23:24.369 [2024-11-19 18:22:25.653296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3d490 is same with the state(6) to be set 00:23:24.370 [2024-11-19 18:22:25.654316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3d960 is same with the state(6) to be set 00:23:24.371 [2024-11-19 18:22:25.655638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3e320 is same with the state(6) to be set 00:23:24.372 [2024-11-19 18:22:25.662153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662205] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7f20 is same with the state(6) to be set 00:23:24.372 [2024-11-19 18:22:25.662289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9c9f0 is same with the state(6) to be set 00:23:24.372 [2024-11-19 18:22:25.662375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662430] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa2420 is same with the state(6) to be set 00:23:24.372 [2024-11-19 18:22:25.662460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa3810 is same with the state(6) to be set 00:23:24.372 [2024-11-19 18:22:25.662550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5cb0 is same with the state(6) to be set 00:23:24.372 [2024-11-19 18:22:25.662635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed1180 is same with the state(6) to be set 00:23:24.372 [2024-11-19 18:22:25.662719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.372 [2024-11-19 18:22:25.662760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:24.372 [2024-11-19 18:22:25.662768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.373 [2024-11-19 18:22:25.662775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.662782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9afa0 is same with the state(6) to be set 00:23:24.373 [2024-11-19 18:22:25.662806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.373 [2024-11-19 18:22:25.662814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.662823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.373 [2024-11-19 18:22:25.662831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.662839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.373 [2024-11-19 18:22:25.662846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.662855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.373 [2024-11-19 18:22:25.662862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.662869] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bd610 is same with the state(6) to be set 00:23:24.373 [2024-11-19 18:22:25.662891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.373 [2024-11-19 18:22:25.662899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.662907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.373 [2024-11-19 18:22:25.662915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.662923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.373 [2024-11-19 18:22:25.662930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.662938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.373 [2024-11-19 18:22:25.662945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.662952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf19d00 is same with the state(6) to be set 00:23:24.373 [2024-11-19 18:22:25.663251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 
18:22:25.663386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 [2024-11-19 18:22:25.663753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.373 [2024-11-19 18:22:25.663760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.373 
[2024-11-19 18:22:25.663769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.663777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.663786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.663794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.663803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.663810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.663820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.663827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.663837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.663844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.663854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.663861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.663870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.663877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.663886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.663894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.663903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.663910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.663919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.663928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.663937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.663945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.663954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.663961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.663970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.663978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.663988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.663995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 
18:22:25.664145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea9fc0 is same with the state(6) to be set 00:23:24.374 [2024-11-19 18:22:25.664536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.374 [2024-11-19 18:22:25.664596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.374 [2024-11-19 18:22:25.664603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 
18:22:25.664704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664804] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.664980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.664990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 
18:22:25.664997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.665006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.665013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.665022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.665029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.665039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.665047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.665056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.665064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.665073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.665080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.665091] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.665099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.665108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.665115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.375 [2024-11-19 18:22:25.665124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.375 [2024-11-19 18:22:25.666929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a4e0 is same with the state(6) to be set 00:23:24.375 [2024-11-19 18:22:25.666943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a4e0 is same with the state(6) to be set 00:23:24.375 [2024-11-19 18:22:25.666948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a4e0 is same with the state(6) to be set 00:23:24.375 [2024-11-19 18:22:25.666952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a4e0 is same with the state(6) to be set 00:23:24.375 [2024-11-19 18:22:25.666960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a4e0 is same with the state(6) to be set 00:23:24.375 [2024-11-19 18:22:25.666964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a4e0 is same with the state(6) to be set 00:23:24.375 [2024-11-19 18:22:25.666969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a4e0 is same with the state(6) to be set 00:23:24.375 [2024-11-19 18:22:25.666973] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a4e0 is same with the state(6) to be set 00:23:24.376 [2024-11-19 18:22:25.667205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a4e0 is same with the state(6) to be set 00:23:24.376 [2024-11-19 18:22:25.667210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a4e0 is same with the state(6) to be set 00:23:24.376 [2024-11-19 18:22:25.667216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a4e0 is same with the state(6) to be set 00:23:24.376 [2024-11-19 18:22:25.667220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a4e0 is same with the state(6) to be set 00:23:24.376 [2024-11-19 18:22:25.667225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a4e0 is same with the state(6) to be set 00:23:24.376 [2024-11-19 18:22:25.667229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a4e0 is same with the state(6) to be set 00:23:24.376 [2024-11-19 18:22:25.680369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.376 [2024-11-19 18:22:25.680418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.376 [2024-11-19 18:22:25.680428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.376 [2024-11-19 18:22:25.680439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.376 [2024-11-19 18:22:25.680446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.376 
[2024-11-19 18:22:25.680457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.376 [2024-11-19 18:22:25.680464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.376 [2024-11-19 18:22:25.680474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.376 [2024-11-19 18:22:25.680481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.376 [2024-11-19 18:22:25.680490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.376 [2024-11-19 18:22:25.680498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.376 [2024-11-19 18:22:25.680507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.376 [2024-11-19 18:22:25.680515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.376 [2024-11-19 18:22:25.680524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.376 [2024-11-19 18:22:25.680531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.376 [2024-11-19 18:22:25.680541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.376 [2024-11-19 18:22:25.680548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.376 [2024-11-19 18:22:25.680558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.376 [2024-11-19 18:22:25.680565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.376 [2024-11-19 18:22:25.680575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.376 [2024-11-19 18:22:25.680582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.376 [2024-11-19 18:22:25.680596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.376 [2024-11-19 18:22:25.680604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.376 [2024-11-19 18:22:25.680614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.376 [2024-11-19 18:22:25.680621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.376 [2024-11-19 18:22:25.680630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.376 [2024-11-19 18:22:25.680638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.376 [2024-11-19 18:22:25.680647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 
nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.680654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.680664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.680671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.680681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.680688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.680698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.680705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.680714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.680722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.680731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.680739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:24.377 [2024-11-19 18:22:25.680748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.680755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.680764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.680774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.680784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.680792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.680802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.680811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.680821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.680829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.680839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.680847] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.680857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.680864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.680873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.680881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.680891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.680899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.680908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.680915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.681074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.377 [2024-11-19 18:22:25.681088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.681097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.377 [2024-11-19 18:22:25.681105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.681114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.377 [2024-11-19 18:22:25.681122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.681130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.377 [2024-11-19 18:22:25.681137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.681146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec310 is same with the state(6) to be set 00:23:24.377 [2024-11-19 18:22:25.681181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef7f20 (9): Bad file descriptor 00:23:24.377 [2024-11-19 18:22:25.681199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9c9f0 (9): Bad file descriptor 00:23:24.377 [2024-11-19 18:22:25.681217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa2420 (9): Bad file descriptor 00:23:24.377 [2024-11-19 18:22:25.681234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa3810 (9): Bad file descriptor 00:23:24.377 [2024-11-19 18:22:25.681253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa5cb0 (9): Bad file descriptor 00:23:24.377 [2024-11-19 18:22:25.681271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0xed1180 (9): Bad file descriptor 00:23:24.377 [2024-11-19 18:22:25.681288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9afa0 (9): Bad file descriptor 00:23:24.377 [2024-11-19 18:22:25.681303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bd610 (9): Bad file descriptor 00:23:24.377 [2024-11-19 18:22:25.681321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf19d00 (9): Bad file descriptor 00:23:24.377 [2024-11-19 18:22:25.684480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:24.377 [2024-11-19 18:22:25.684513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:24.377 [2024-11-19 18:22:25.685721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.377 [2024-11-19 18:22:25.685746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef7f20 with addr=10.0.0.2, port=4420 00:23:24.377 [2024-11-19 18:22:25.685757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7f20 is same with the state(6) to be set 00:23:24.377 [2024-11-19 18:22:25.686099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.377 [2024-11-19 18:22:25.686109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf19d00 with addr=10.0.0.2, port=4420 00:23:24.377 [2024-11-19 18:22:25.686116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf19d00 is same with the state(6) to be set 00:23:24.377 [2024-11-19 18:22:25.686171] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:24.377 [2024-11-19 18:22:25.686212] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:24.377 [2024-11-19 
18:22:25.686250] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:24.377 [2024-11-19 18:22:25.686288] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:24.377 [2024-11-19 18:22:25.686326] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:24.377 [2024-11-19 18:22:25.686365] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:24.377 [2024-11-19 18:22:25.686478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef7f20 (9): Bad file descriptor 00:23:24.377 [2024-11-19 18:22:25.686492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf19d00 (9): Bad file descriptor 00:23:24.377 [2024-11-19 18:22:25.686596] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:24.377 [2024-11-19 18:22:25.686633] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:24.377 [2024-11-19 18:22:25.686652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:24.377 [2024-11-19 18:22:25.686660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:24.377 [2024-11-19 18:22:25.686670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:24.377 [2024-11-19 18:22:25.686679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:23:24.377 [2024-11-19 18:22:25.686688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:24.377 [2024-11-19 18:22:25.686695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:24.377 [2024-11-19 18:22:25.686703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:24.377 [2024-11-19 18:22:25.686709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:24.377 [2024-11-19 18:22:25.691049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeec310 (9): Bad file descriptor 00:23:24.377 [2024-11-19 18:22:25.691227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.691241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.691256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.691265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.377 [2024-11-19 18:22:25.691275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.377 [2024-11-19 18:22:25.691284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691401] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691502] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 
18:22:25.691712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.378 [2024-11-19 18:22:25.691972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.378 [2024-11-19 18:22:25.691983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.379 [2024-11-19 18:22:25.691990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.379 [2024-11-19 18:22:25.692001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.379 [2024-11-19 18:22:25.692009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:24.379 [2024-11-19 18:22:25.692018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.379 [2024-11-19 18:22:25.692026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.379 [2024-11-19 18:22:25.692037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.379 [2024-11-19 18:22:25.692045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.379 [2024-11-19 18:22:25.692055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.379 [2024-11-19 18:22:25.692064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.379 [2024-11-19 18:22:25.692074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.379 [2024-11-19 18:22:25.692081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.379 [2024-11-19 18:22:25.692092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.379 [2024-11-19 18:22:25.692100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.379 [2024-11-19 18:22:25.692111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.379 [2024-11-19 18:22:25.692118] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.379 [2024-11-19 18:22:25.692129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.379 [2024-11-19 18:22:25.692138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.379 [2024-11-19 18:22:25.692148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.379 [2024-11-19 18:22:25.692156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.379 [2024-11-19 18:22:25.692170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.379 [2024-11-19 18:22:25.692178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.379 [2024-11-19 18:22:25.692188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.379 [2024-11-19 18:22:25.692197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.379 [2024-11-19 18:22:25.692207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.379 [2024-11-19 18:22:25.692215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.379 [2024-11-19 18:22:25.692225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
00:23:24.379 [2024-11-19 18:22:25.692233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.379 [2024-11-19 18:22:25.692242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeated for cid:55 through cid:63, lba advancing by 128 per command up to lba:32640 ...]
00:23:24.379 [2024-11-19 18:22:25.692402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf937d0 is same with the state(6) to be set
00:23:24.379 [2024-11-19 18:22:25.693743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.379 [2024-11-19 18:22:25.693759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeated for cid:1 through cid:63, lba advancing by 128 per command up to lba:24448 ...]
00:23:24.381 [2024-11-19 18:22:25.694925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca9aa0 is same with the state(6) to be set
00:23:24.381 [2024-11-19 18:22:25.696258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.381 [2024-11-19 18:22:25.696273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeated for cid:1 through cid:43, lba advancing by 128 per command up to lba:30080 ...]
00:23:24.382 [2024-11-19 18:22:25.697075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.382 [2024-11-19 18:22:25.697093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.382 [2024-11-19 18:22:25.697113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.382 [2024-11-19 18:22:25.697130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.382 [2024-11-19 18:22:25.697149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.382 [2024-11-19 18:22:25.697171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697179] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.382 [2024-11-19 18:22:25.697189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.382 [2024-11-19 18:22:25.697207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.382 [2024-11-19 18:22:25.697225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.382 [2024-11-19 18:22:25.697242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.382 [2024-11-19 18:22:25.697261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.382 [2024-11-19 18:22:25.697279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.382 [2024-11-19 18:22:25.697297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.382 [2024-11-19 18:22:25.697316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.382 [2024-11-19 18:22:25.697333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.382 [2024-11-19 18:22:25.697353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.382 [2024-11-19 18:22:25.697371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:24.382 [2024-11-19 18:22:25.697388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.382 [2024-11-19 18:22:25.697396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.697406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.697413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.697423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.697431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.697440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcaad60 is same with the state(6) to be set 00:23:24.383 [2024-11-19 18:22:25.698770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.698784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.698797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.698804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.698815] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.698823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.698833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.698841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.698850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.698859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.698869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.698876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.698886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.698897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.698907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.698915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.698925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.698932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.698942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.698951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.698961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.698968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.698979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.698988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.698997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699125] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699233] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 18:22:25.699422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.383 [2024-11-19 
18:22:25.699440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.383 [2024-11-19 18:22:25.699447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:24.384 [2024-11-19 18:22:25.699736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699831] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.699917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.699926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xea5fa0 is same with the state(6) to be set 00:23:24.384 [2024-11-19 18:22:25.701196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.701212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.701225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.701235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.701247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.701255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.701266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.701274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.701284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.701291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.701304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:24.384 [2024-11-19 18:22:25.701311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.384 [2024-11-19 18:22:25.701321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701411] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701505] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 
18:22:25.701703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.385 [2024-11-19 18:22:25.701984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.385 [2024-11-19 18:22:25.701992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:24.386 [2024-11-19 18:22:25.702002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.702009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.702019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.702027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.702037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.702044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.702054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.702062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.702071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.702079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.702089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.702096] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.702105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.702114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.702123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.702130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.702140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.702148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.702161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.702169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.702180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.702188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.702197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.702205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.702214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.702222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.702231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.702239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.702249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.702256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.702265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.702273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.706740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.706781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.706792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.706801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.706812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.706819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.706829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea74d0 is same with the state(6) to be set 00:23:24.386 [2024-11-19 18:22:25.708180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.386 [2024-11-19 18:22:25.708528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.386 [2024-11-19 18:22:25.708536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.387 [2024-11-19 18:22:25.708546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.387 [2024-11-19 18:22:25.708554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.387 [2024-11-19 18:22:25.708563] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.387 [2024-11-19 18:22:25.708571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:21 through cid:63, lba advancing by 128 from 19072 to 24448 ...] 
00:23:24.388 [2024-11-19 18:22:25.709350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea8a00 is same with the state(6) to be set 00:23:24.388 [2024-11-19 18:22:25.710631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.388 [2024-11-19 18:22:25.710645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1 through cid:62, lba advancing by 128 from 24704 to 32512 ...] 
00:23:24.389 [2024-11-19 18:22:25.711783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.389 [2024-11-19 18:22:25.711791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.389 [2024-11-19 18:22:25.711802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeab4f0 is same with the state(6) to be set 00:23:24.389 [2024-11-19 18:22:25.713075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:24.389 [2024-11-19 18:22:25.713092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:24.389 [2024-11-19 18:22:25.713103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:24.389 [2024-11-19 18:22:25.713114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:24.389 [2024-11-19 18:22:25.713213] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:23:24.389 [2024-11-19 18:22:25.713228] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:23:24.390 [2024-11-19 18:22:25.713240] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:23:24.390 [2024-11-19 18:22:25.713327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:24.390 [2024-11-19 18:22:25.713339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:24.390 [2024-11-19 18:22:25.713349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:24.390 [2024-11-19 18:22:25.713706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.390 [2024-11-19 18:22:25.713723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa5cb0 with addr=10.0.0.2, port=4420 00:23:24.390 [2024-11-19 18:22:25.713732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5cb0 is same with the state(6) to be set 00:23:24.390 [2024-11-19 18:22:25.714001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.390 [2024-11-19 18:22:25.714015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa9c9f0 with addr=10.0.0.2, port=4420 00:23:24.390 [2024-11-19 18:22:25.714022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9c9f0 is same with the state(6) to be set 00:23:24.390 [2024-11-19 18:22:25.714193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.390 [2024-11-19 18:22:25.714211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa3810 with addr=10.0.0.2, port=4420 00:23:24.390 [2024-11-19 18:22:25.714220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa3810 is same with the state(6) to be set 00:23:24.390 [2024-11-19 18:22:25.714497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.390 [2024-11-19 18:22:25.714508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0xaa2420 with addr=10.0.0.2, port=4420 00:23:24.390 [2024-11-19 18:22:25.714515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa2420 is same with the state(6) to be set 00:23:24.390 [2024-11-19 18:22:25.716366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:24.390 [2024-11-19 18:22:25.716382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:24.390 [2024-11-19 18:22:25.716722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.390 [2024-11-19 18:22:25.716736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed1180 with addr=10.0.0.2, port=4420 00:23:24.390 [2024-11-19 18:22:25.716744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed1180 is same with the state(6) to be set 00:23:24.390 [2024-11-19 18:22:25.717084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.390 [2024-11-19 18:22:25.717100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bd610 with addr=10.0.0.2, port=4420 00:23:24.390 [2024-11-19 18:22:25.717108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bd610 is same with the state(6) to be set 00:23:24.390 [2024-11-19 18:22:25.717464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.390 [2024-11-19 18:22:25.717475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa9afa0 with addr=10.0.0.2, port=4420 00:23:24.390 [2024-11-19 18:22:25.717483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9afa0 is same with the state(6) to be set 00:23:24.390 [2024-11-19 18:22:25.717495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xaa5cb0 (9): Bad file descriptor 00:23:24.390 [2024-11-19 18:22:25.717506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9c9f0 (9): Bad file descriptor 00:23:24.390 [2024-11-19 18:22:25.717516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa3810 (9): Bad file descriptor 00:23:24.390 [2024-11-19 18:22:25.717525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa2420 (9): Bad file descriptor 00:23:24.390 [2024-11-19 18:22:25.717619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.717983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.717991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.718007] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.718014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.718024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.718032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.718041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.718049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.718059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.390 [2024-11-19 18:22:25.718067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.390 [2024-11-19 18:22:25.718076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718102] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 
18:22:25.718310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:24.391 [2024-11-19 18:22:25.718616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718714] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.391 [2024-11-19 18:22:25.718767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.391 [2024-11-19 18:22:25.718775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaca20 is same with the state(6) to be set
00:23:24.391 task offset: 24576 on job bdev=Nvme7n1 fails
00:23:24.391
00:23:24.392                                                         Latency(us)
00:23:24.392 [2024-11-19T17:22:25.863Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:24.392 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.392 Job: Nvme1n1 ended in about 0.96 seconds with error
00:23:24.392 	 Verification LBA range: start 0x0 length 0x400
00:23:24.392 	 Nvme1n1             :       0.96     200.84      12.55      66.95       0.00  236253.44   21080.75  248162.99
00:23:24.392 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.392 Job: Nvme2n1 ended in about 0.96 seconds with error
00:23:24.392 	 Verification LBA range: start 0x0 length 0x400
00:23:24.392 	 Nvme2n1             :       0.96     133.54       8.35      66.77       0.00  309509.69   19114.67  274377.39
00:23:24.392 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.392 Job: Nvme3n1 ended in about 0.96 seconds with error
00:23:24.392 	 Verification LBA range: start 0x0 length 0x400
00:23:24.392 	 Nvme3n1             :       0.96     199.79      12.49      66.60       0.00  227857.07   15728.64  244667.73
00:23:24.392 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.392 Job: Nvme4n1 ended in about 0.96 seconds with error
00:23:24.392 	 Verification LBA range: start 0x0 length 0x400
00:23:24.392 	 Nvme4n1             :       0.96     203.43      12.71      66.43       0.00  220245.28   15291.73  242920.11
00:23:24.392 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.392 Job: Nvme5n1 ended in about 0.97 seconds with error
00:23:24.392 	 Verification LBA range: start 0x0 length 0x400
00:23:24.392 	 Nvme5n1             :       0.97     131.91       8.24      65.95       0.00  294237.01   21299.20  276125.01
00:23:24.392 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.392 Job: Nvme6n1 ended in about 0.97 seconds with error
00:23:24.392 	 Verification LBA range: start 0x0 length 0x400
00:23:24.392 	 Nvme6n1             :       0.97     131.57       8.22      65.78       0.00  288557.51   17803.95  256901.12
00:23:24.392 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.392 Job: Nvme7n1 ended in about 0.95 seconds with error
00:23:24.392 	 Verification LBA range: start 0x0 length 0x400
00:23:24.392 	 Nvme7n1             :       0.95     203.14      12.70      67.71       0.00  204551.89   19333.12  248162.99
00:23:24.392 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.392 Job: Nvme8n1 ended in about 0.98 seconds with error
00:23:24.392 	 Verification LBA range: start 0x0 length 0x400
00:23:24.392 	 Nvme8n1             :       0.98     196.86      12.30      65.62       0.00  207218.13   17585.49  242920.11
00:23:24.392 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.392 Job: Nvme9n1 ended in about 0.98 seconds with error
00:23:24.392 	 Verification LBA range: start 0x0 length 0x400
00:23:24.392 	 Nvme9n1             :       0.98     130.31       8.14      65.15       0.00  272242.35   22609.92  246415.36
00:23:24.392 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.392 Job: Nvme10n1 ended in about 0.95 seconds with error
00:23:24.392 	 Verification LBA range: start 0x0 length 0x400
00:23:24.392 	 Nvme10n1            :       0.95     202.86      12.68      67.62       0.00  190391.68   14308.69  234181.97
00:23:24.392 [2024-11-19T17:22:25.863Z] ===================================================================================================================
00:23:24.392 [2024-11-19T17:22:25.863Z] Total               :              1734.25     108.39     664.59       0.00  239957.71   14308.69  276125.01
00:23:24.392 [2024-11-19 18:22:25.744964] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:24.392 [2024-11-19 18:22:25.745002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:23:24.392 [2024-11-19 18:22:25.745411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.392 [2024-11-19 18:22:25.745429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf19d00 with addr=10.0.0.2, port=4420 00:23:24.392 [2024-11-19 18:22:25.745439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf19d00 is same with the state(6) to be set 00:23:24.392 [2024-11-19 18:22:25.745761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.392 [2024-11-19 18:22:25.745771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef7f20 with addr=10.0.0.2, port=4420 00:23:24.392 [2024-11-19 18:22:25.745779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7f20 is same with the state(6) to be set 00:23:24.392 [2024-11-19 18:22:25.745791]
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed1180 (9): Bad file descriptor 00:23:24.392 [2024-11-19 18:22:25.745804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bd610 (9): Bad file descriptor 00:23:24.392 [2024-11-19 18:22:25.745814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9afa0 (9): Bad file descriptor 00:23:24.392 [2024-11-19 18:22:25.745823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:24.392 [2024-11-19 18:22:25.745830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:24.392 [2024-11-19 18:22:25.745840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:24.392 [2024-11-19 18:22:25.745850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:24.392 [2024-11-19 18:22:25.745859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:24.392 [2024-11-19 18:22:25.745866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:24.392 [2024-11-19 18:22:25.745873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:24.392 [2024-11-19 18:22:25.745880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:23:24.392 [2024-11-19 18:22:25.745888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:24.392 [2024-11-19 18:22:25.745894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:24.392 [2024-11-19 18:22:25.745907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:24.392 [2024-11-19 18:22:25.745913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:24.392 [2024-11-19 18:22:25.745922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:24.392 [2024-11-19 18:22:25.745930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:24.392 [2024-11-19 18:22:25.745937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:24.392 [2024-11-19 18:22:25.745944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:23:24.392 [2024-11-19 18:22:25.746244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.392 [2024-11-19 18:22:25.746259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeec310 with addr=10.0.0.2, port=4420 00:23:24.392 [2024-11-19 18:22:25.746268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec310 is same with the state(6) to be set 00:23:24.392 [2024-11-19 18:22:25.746277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf19d00 (9): Bad file descriptor 00:23:24.392 [2024-11-19 18:22:25.746287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef7f20 (9): Bad file descriptor 00:23:24.392 [2024-11-19 18:22:25.746297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:24.392 [2024-11-19 18:22:25.746304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:24.392 [2024-11-19 18:22:25.746311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:24.392 [2024-11-19 18:22:25.746319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:24.392 [2024-11-19 18:22:25.746327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:24.392 [2024-11-19 18:22:25.746333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:24.392 [2024-11-19 18:22:25.746340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:23:24.392 [2024-11-19 18:22:25.746347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:24.392 [2024-11-19 18:22:25.746355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:24.392 [2024-11-19 18:22:25.746361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:24.392 [2024-11-19 18:22:25.746368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:24.392 [2024-11-19 18:22:25.746375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:24.392 [2024-11-19 18:22:25.746442] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:23:24.392 [2024-11-19 18:22:25.746455] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:23:24.392 [2024-11-19 18:22:25.746781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeec310 (9): Bad file descriptor 00:23:24.392 [2024-11-19 18:22:25.746793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:24.392 [2024-11-19 18:22:25.746800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:24.392 [2024-11-19 18:22:25.746808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:24.392 [2024-11-19 18:22:25.746818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:23:24.392 [2024-11-19 18:22:25.746826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:24.392 [2024-11-19 18:22:25.746832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:24.392 [2024-11-19 18:22:25.746840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:24.392 [2024-11-19 18:22:25.746846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:24.392 [2024-11-19 18:22:25.746886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:24.392 [2024-11-19 18:22:25.746897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:24.392 [2024-11-19 18:22:25.746906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:24.392 [2024-11-19 18:22:25.746916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:24.393 [2024-11-19 18:22:25.746926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:24.393 [2024-11-19 18:22:25.746935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:24.393 [2024-11-19 18:22:25.746944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:24.393 [2024-11-19 18:22:25.746994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:24.393 [2024-11-19 18:22:25.747002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:24.393 [2024-11-19 
18:22:25.747009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:24.393 [2024-11-19 18:22:25.747016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:23:24.393 [2024-11-19 18:22:25.747392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.393 [2024-11-19 18:22:25.747406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa2420 with addr=10.0.0.2, port=4420 00:23:24.393 [2024-11-19 18:22:25.747414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa2420 is same with the state(6) to be set 00:23:24.393 [2024-11-19 18:22:25.747742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.393 [2024-11-19 18:22:25.747754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa3810 with addr=10.0.0.2, port=4420 00:23:24.393 [2024-11-19 18:22:25.747761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa3810 is same with the state(6) to be set 00:23:24.393 [2024-11-19 18:22:25.748067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.393 [2024-11-19 18:22:25.748078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa9c9f0 with addr=10.0.0.2, port=4420 00:23:24.393 [2024-11-19 18:22:25.748087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9c9f0 is same with the state(6) to be set 00:23:24.393 [2024-11-19 18:22:25.748267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.393 [2024-11-19 18:22:25.748278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa5cb0 with addr=10.0.0.2, port=4420 00:23:24.393 [2024-11-19 18:22:25.748286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xaa5cb0 is same with the state(6) to be set 00:23:24.393 [2024-11-19 18:22:25.748618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.393 [2024-11-19 18:22:25.748629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa9afa0 with addr=10.0.0.2, port=4420 00:23:24.393 [2024-11-19 18:22:25.748641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9afa0 is same with the state(6) to be set 00:23:24.393 [2024-11-19 18:22:25.748961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.393 [2024-11-19 18:22:25.748972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bd610 with addr=10.0.0.2, port=4420 00:23:24.393 [2024-11-19 18:22:25.748980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bd610 is same with the state(6) to be set 00:23:24.393 [2024-11-19 18:22:25.749319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.393 [2024-11-19 18:22:25.749330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed1180 with addr=10.0.0.2, port=4420 00:23:24.393 [2024-11-19 18:22:25.749337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed1180 is same with the state(6) to be set 00:23:24.393 [2024-11-19 18:22:25.749369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa2420 (9): Bad file descriptor 00:23:24.393 [2024-11-19 18:22:25.749379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa3810 (9): Bad file descriptor 00:23:24.393 [2024-11-19 18:22:25.749389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9c9f0 (9): Bad file descriptor 00:23:24.393 [2024-11-19 18:22:25.749398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xaa5cb0 (9): Bad file descriptor 00:23:24.393 [2024-11-19 18:22:25.749408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9afa0 (9): Bad file descriptor 00:23:24.393 [2024-11-19 18:22:25.749418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bd610 (9): Bad file descriptor 00:23:24.393 [2024-11-19 18:22:25.749428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed1180 (9): Bad file descriptor 00:23:24.393 [2024-11-19 18:22:25.749458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:24.393 [2024-11-19 18:22:25.749465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:24.393 [2024-11-19 18:22:25.749472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:24.393 [2024-11-19 18:22:25.749479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:24.393 [2024-11-19 18:22:25.749487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:24.393 [2024-11-19 18:22:25.749494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:24.393 [2024-11-19 18:22:25.749501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:24.393 [2024-11-19 18:22:25.749507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:23:24.393 [2024-11-19 18:22:25.749515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:24.393 [2024-11-19 18:22:25.749521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:24.393 [2024-11-19 18:22:25.749528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:24.393 [2024-11-19 18:22:25.749535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:24.393 [2024-11-19 18:22:25.749543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:24.393 [2024-11-19 18:22:25.749549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:24.393 [2024-11-19 18:22:25.749559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:24.393 [2024-11-19 18:22:25.749566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:24.393 [2024-11-19 18:22:25.749573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:24.393 [2024-11-19 18:22:25.749580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:24.393 [2024-11-19 18:22:25.749586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:24.393 [2024-11-19 18:22:25.749593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:23:24.393 [2024-11-19 18:22:25.749600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:24.393 [2024-11-19 18:22:25.749607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:24.393 [2024-11-19 18:22:25.749614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:24.393 [2024-11-19 18:22:25.749620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:24.393 [2024-11-19 18:22:25.749628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:24.393 [2024-11-19 18:22:25.749634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:24.393 [2024-11-19 18:22:25.749641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:24.393 [2024-11-19 18:22:25.749647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:23:24.654 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2056033 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2056033 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2056033 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:25.597 rmmod nvme_tcp 00:23:25.597 rmmod nvme_fabrics 00:23:25.597 rmmod nvme_keyring 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:25.597 18:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2055814 ']' 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2055814 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2055814 ']' 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2055814 00:23:25.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2055814) - No such process 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2055814 is not found' 00:23:25.597 Process with pid 2055814 is not found 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:25.597 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:25.597 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:25.597 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:25.597 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.597 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.597 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:28.140 00:23:28.140 real 0m7.770s 00:23:28.140 user 0m19.091s 00:23:28.140 sys 0m1.240s 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.140 ************************************ 00:23:28.140 END TEST nvmf_shutdown_tc3 00:23:28.140 ************************************ 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:28.140 ************************************ 00:23:28.140 START TEST nvmf_shutdown_tc4 00:23:28.140 ************************************ 00:23:28.140 18:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:28.140 18:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:28.140 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:28.141 18:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:28.141 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:28.141 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.141 18:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:23:28.141 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:28.141 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:28.141 18:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:28.141 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:28.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:28.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:23:28.141 00:23:28.142 --- 10.0.0.2 ping statistics --- 00:23:28.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.142 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:28.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:23:28.142 00:23:28.142 --- 10.0.0.1 ping statistics --- 00:23:28.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.142 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:28.142 18:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2057487 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2057487 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2057487 ']' 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.142 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:28.142 [2024-11-19 18:22:29.602785] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:23:28.142 [2024-11-19 18:22:29.602835] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.402 [2024-11-19 18:22:29.693449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.402 [2024-11-19 18:22:29.725689] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.402 [2024-11-19 18:22:29.725719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.402 [2024-11-19 18:22:29.725725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.402 [2024-11-19 18:22:29.725730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.402 [2024-11-19 18:22:29.725734] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:28.402 [2024-11-19 18:22:29.727019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.402 [2024-11-19 18:22:29.727198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.402 [2024-11-19 18:22:29.727325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.402 [2024-11-19 18:22:29.727326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:28.971 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.971 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:28.971 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:28.971 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:28.971 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:28.971 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.971 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:28.971 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.971 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:29.231 [2024-11-19 18:22:30.445037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.231 18:22:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.231 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:29.231 Malloc1 00:23:29.231 [2024-11-19 18:22:30.563425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.231 Malloc2 00:23:29.231 Malloc3 00:23:29.231 Malloc4 00:23:29.231 Malloc5 00:23:29.507 Malloc6 00:23:29.507 Malloc7 00:23:29.507 Malloc8 00:23:29.507 Malloc9 
00:23:29.507 Malloc10 00:23:29.507 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.507 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:29.507 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:29.507 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:29.507 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2057874 00:23:29.507 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:29.507 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:29.767 [2024-11-19 18:22:31.047087] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:23:35.054 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:35.054 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2057487 00:23:35.054 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2057487 ']' 00:23:35.054 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2057487 00:23:35.054 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:35.054 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.054 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2057487 00:23:35.054 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:35.054 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:35.054 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2057487' 00:23:35.054 killing process with pid 2057487 00:23:35.054 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2057487 00:23:35.054 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2057487 00:23:35.054 [2024-11-19 18:22:36.047038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a89d0 is same with the state(6) to be set 00:23:35.054 [2024-11-19 
18:22:36.047081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a89d0 is same with the state(6) to be set 00:23:35.054 [2024-11-19 18:22:36.047088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a89d0 is same with the state(6) to be set 00:23:35.054 [2024-11-19 18:22:36.047093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a89d0 is same with the state(6) to be set 00:23:35.054 [2024-11-19 18:22:36.047342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8ea0 is same with the state(6) to be set 00:23:35.054 [2024-11-19 18:22:36.047370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8ea0 is same with the state(6) to be set 00:23:35.054 [2024-11-19 18:22:36.047382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8ea0 is same with the state(6) to be set 00:23:35.054 [2024-11-19 18:22:36.047388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8ea0 is same with the state(6) to be set 00:23:35.054 [2024-11-19 18:22:36.047393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8ea0 is same with the state(6) to be set 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 starting I/O failed: -6 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 starting I/O failed: -6 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 [2024-11-19 18:22:36.047634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d0cb0 is same with the state(6) to be set 00:23:35.054 [2024-11-19 18:22:36.047656] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d0cb0 is same with the state(6) to be set 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 [2024-11-19 18:22:36.047663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d0cb0 is same with the state(6) to be set 00:23:35.054 starting I/O failed: -6 00:23:35.054 [2024-11-19 18:22:36.047668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d0cb0 is same with the state(6) to be set 00:23:35.054 [2024-11-19 18:22:36.047674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d0cb0 is same with the state(6) to be set 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 [2024-11-19 18:22:36.047679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d0cb0 is same with the state(6) to be set 00:23:35.054 [2024-11-19 18:22:36.047683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d0cb0 is same with the state(6) to be set 00:23:35.054 [2024-11-19 18:22:36.047689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d0cb0 is same with the state(6) to be set 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 [2024-11-19 18:22:36.047694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d0cb0 is same with the state(6) to be set 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 starting I/O failed: -6 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 starting I/O failed: -6 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed 
with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 starting I/O failed: -6 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 [2024-11-19 18:22:36.047881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8500 is same with the state(6) to be set 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 [2024-11-19 18:22:36.047904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8500 is same with the state(6) to be set 00:23:35.054 [2024-11-19 18:22:36.047910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8500 is same with the state(6) to be set 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 [2024-11-19 18:22:36.047915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8500 is same with the state(6) to be set 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 starting I/O failed: -6 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 starting I/O failed: -6 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 starting I/O failed: -6 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 [2024-11-19 18:22:36.048097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7690 is same with the state(6) to be set 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 [2024-11-19 18:22:36.048110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7690 is same with the state(6) to be set 00:23:35.054 [2024-11-19
18:22:36.048116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7690 is same with the state(6) to be set 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 [2024-11-19 18:22:36.048121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7690 is same with the state(6) to be set 00:23:35.054 [2024-11-19 18:22:36.048127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7690 is same with the state(6) to be set 00:23:35.054 [2024-11-19 18:22:36.048132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7690 is same with the state(6) to be set 00:23:35.054 [2024-11-19 18:22:36.048137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7690 is same with the state(6) to be set 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 [2024-11-19 18:22:36.048143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7690 is same with the state(6) to be set 00:23:35.054 starting I/O failed: -6 00:23:35.054 [2024-11-19 18:22:36.048148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7690 is same with the state(6) to be set 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 [2024-11-19 18:22:36.048153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7690 is same with the state(6) to be set 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 starting I/O failed: -6 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054 [2024-11-19 18:22:36.048270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:35.054 Write completed with error (sct=0, sc=8) 00:23:35.054
starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 [2024-11-19 18:22:36.048528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8030 is same with the state(6) to be set 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 [2024-11-19 18:22:36.048539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8030 is same with the state(6) to be set 00:23:35.055 starting I/O failed: -6 00:23:35.055 [2024-11-19 18:22:36.048546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8030 is same with the state(6) to be set 00:23:35.055 [2024-11-19 18:22:36.048551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8030 is same with the state(6) to be set 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 [2024-11-19 18:22:36.048556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8030 is same with the state(6) to be set 00:23:35.055 [2024-11-19 18:22:36.048561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8030 is same with the state(6) to be set 00:23:35.055 starting I/O failed: -6 00:23:35.055 [2024-11-19 18:22:36.048566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8030 is same with the state(6) to be set 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 [2024-11-19 18:22:36.048571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8030 is same with the state(6) to be set 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with 
error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 [2024-11-19 18:22:36.048835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a71c0 is same with the state(6) to be set 00:23:35.055 [2024-11-19 18:22:36.048848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a71c0 is same with the state(6) to be set 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 [2024-11-19 18:22:36.048855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a71c0 is same with the state(6) to be set 00:23:35.055 [2024-11-19 18:22:36.048860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a71c0 is same with the state(6) to be set 00:23:35.055 [2024-11-19 18:22:36.048865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a71c0 is same with the state(6) to be set 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 [2024-11-19 18:22:36.048870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a71c0 is same with the state(6) to be set 00:23:35.055 starting I/O failed: -6 00:23:35.055 [2024-11-19 
18:22:36.048875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a71c0 is same with the state(6) to be set 00:23:35.055 [2024-11-19 18:22:36.048880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a71c0 is same with the state(6) to be set 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 [2024-11-19 18:22:36.048885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a71c0 is same with the state(6) to be set 00:23:35.055 starting I/O failed: -6 00:23:35.055 [2024-11-19 18:22:36.048890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a71c0 is same with the state(6) to be set 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 [2024-11-19 18:22:36.049117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 
00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 
00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 [2024-11-19 18:22:36.050044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.055 Write completed with error (sct=0, sc=8) 00:23:35.055 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O 
failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting 
I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 [2024-11-19 18:22:36.051310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:35.056 NVMe io qpair process completion error 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with 
error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 
00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 [2024-11-19 18:22:36.052245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:35.056 starting I/O failed: -6 00:23:35.056 starting I/O failed: -6 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write 
completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 starting I/O failed: -6 00:23:35.056 Write completed with error (sct=0, sc=8) 00:23:35.056 [2024-11-19 18:22:36.053192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:35.056 starting I/O failed: -6 00:23:35.056 starting I/O failed: -6 00:23:35.056 starting I/O failed: -6 00:23:35.056 starting I/O failed: -6 00:23:35.056 starting I/O failed: -6 00:23:35.056 starting I/O failed: -6 00:23:35.056 starting I/O failed: -6 00:23:35.056 starting I/O failed: -6 00:23:35.057 starting I/O failed: -6 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, 
sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O 
failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 starting I/O failed: -6 00:23:35.057 starting I/O failed: -6 00:23:35.057 starting I/O failed: -6 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting 
I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 
starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 
00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 [2024-11-19 18:22:36.056595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:35.057 NVMe io qpair process completion error 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 Write completed with error (sct=0, sc=8) 00:23:35.057 starting I/O failed: -6 00:23:35.057 Write completed with error (sct=0, sc=8) 
00:23:35.057 Write completed with error (sct=0, sc=8)
00:23:35.057 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided between the unique entries below]
00:23:35.058 [2024-11-19 18:22:36.057866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:35.058 [2024-11-19 18:22:36.058688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:35.058 [2024-11-19 18:22:36.059600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:35.059 [2024-11-19 18:22:36.062539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:35.059 NVMe io qpair process completion error
00:23:35.060 [2024-11-19 18:22:36.063724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:35.060 [2024-11-19 18:22:36.064548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:35.060 [2024-11-19 18:22:36.065496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:35.061 [2024-11-19 18:22:36.067403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:35.061 NVMe io qpair process completion error
00:23:35.061 [2024-11-19 18:22:36.068669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:35.061 [2024-11-19 18:22:36.069487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:35.062 Write completed with error (sct=0, sc=8)
00:23:35.062 starting I/O failed: -6
Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 [2024-11-19 18:22:36.070421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O 
failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting 
I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 [2024-11-19 18:22:36.072811] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:35.062 NVMe io qpair process completion error 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 starting I/O failed: -6 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.062 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed 
with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 [2024-11-19 18:22:36.074090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, 
sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 
Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 [2024-11-19 18:22:36.074969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 
starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 
Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 [2024-11-19 18:22:36.075882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write 
completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.063 starting I/O failed: -6 00:23:35.063 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 
Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 
00:23:35.064 [2024-11-19 18:22:36.077768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:35.064 NVMe io qpair process completion error 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 
00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 [2024-11-19 18:22:36.079067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:35.064 starting I/O failed: -6 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting 
I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.064 starting I/O failed: -6 00:23:35.064 Write completed with error (sct=0, sc=8) 00:23:35.065 Write completed with error (sct=0, sc=8) 00:23:35.065 Write completed with error (sct=0, sc=8) 00:23:35.065 starting I/O failed: -6 00:23:35.065 Write completed with error (sct=0, sc=8) 00:23:35.065 starting I/O failed: -6 00:23:35.065 [2024-11-19 18:22:36.079913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport 
error -6 (No such device or address) on qpair id 3
00:23:35.065 Write completed with error (sct=0, sc=8)
00:23:35.065 starting I/O failed: -6
00:23:35.065 [2024-11-19 18:22:36.080847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:35.066 [2024-11-19 18:22:36.082502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:35.066 NVMe io qpair process completion error
00:23:35.066 Write completed with error (sct=0, sc=8)
00:23:35.066 starting I/O failed: -6
00:23:35.066 [2024-11-19 18:22:36.083786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:35.066 [2024-11-19 18:22:36.084603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:35.066 [2024-11-19 18:22:36.085548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:35.067 Write completed with error (sct=0, sc=8)
00:23:35.067 starting I/O failed: -6
00:23:35.067 [2024-11-19 18:22:36.088262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:35.067 NVMe io qpair process completion error
00:23:35.067 [2024-11-19 18:22:36.089428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:35.068 Write completed with error (sct=0, sc=8)
00:23:35.068 starting I/O failed: -6
00:23:35.068 [2024-11-19 18:22:36.090294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:35.068 [2024-11-19 18:22:36.091234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:35.069 Write completed with error (sct=0, sc=8)
00:23:35.069 starting I/O failed: -6
00:23:35.069 [2024-11-19 18:22:36.092881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:35.069 NVMe io qpair process completion error
00:23:35.069 Write completed with error (sct=0, sc=8)
sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write 
completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 [2024-11-19 18:22:36.093981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 
00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 [2024-11-19 18:22:36.094862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:35.069 Write 
completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 
00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.069 Write completed with error (sct=0, sc=8) 00:23:35.069 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 
00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 [2024-11-19 18:22:36.095824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 
Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 
00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 Write completed with error (sct=0, sc=8) 00:23:35.070 starting I/O failed: -6 00:23:35.070 [2024-11-19 18:22:36.099763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:35.070 NVMe io qpair process completion error 00:23:35.070 Initializing NVMe Controllers 00:23:35.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:23:35.070 Controller IO queue size 128, less than 
required.
00:23:35.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:35.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:35.070 Controller IO queue size 128, less than required.
00:23:35.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:35.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:35.070 Controller IO queue size 128, less than required.
00:23:35.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:35.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:35.070 Controller IO queue size 128, less than required.
00:23:35.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:35.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:35.070 Controller IO queue size 128, less than required.
00:23:35.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:35.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:35.070 Controller IO queue size 128, less than required.
00:23:35.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:35.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:35.070 Controller IO queue size 128, less than required.
00:23:35.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:35.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:35.070 Controller IO queue size 128, less than required.
00:23:35.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:35.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:35.070 Controller IO queue size 128, less than required.
00:23:35.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:35.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:35.070 Controller IO queue size 128, less than required.
00:23:35.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:35.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:35.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:35.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:35.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:35.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:35.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:35.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:35.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:35.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:35.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:35.071 Initialization complete. Launching workers.
00:23:35.071 ========================================================
00:23:35.071 Latency(us)
00:23:35.071 Device Information : IOPS MiB/s Average min max
00:23:35.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1874.67 80.55 68296.99 868.87 121368.57
00:23:35.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1862.09 80.01 68792.54 910.46 150210.96
00:23:35.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1878.24 80.71 68229.75 657.55 150811.21
00:23:35.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1871.31 80.41 68504.49 832.23 132103.97
00:23:35.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1855.37 79.72 69130.27 721.36 121549.40
00:23:35.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1868.80 80.30 68655.19 889.35 135528.46
00:23:35.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1859.57 79.90 68292.19 685.89 120655.00
00:23:35.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1853.49 79.64 68539.80 674.20 122703.70
00:23:35.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1854.12 79.67 68543.72 820.50 121851.45
00:23:35.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1886.00 81.04 67424.96 678.34 121765.08
00:23:35.071 ========================================================
00:23:35.071 Total : 18663.64 801.95 68439.20 657.55 150811.21
00:23:35.071
00:23:35.071 [2024-11-19 18:22:36.103944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a7410 is same with the state(6) to be set
00:23:35.071 [2024-11-19 18:22:36.103988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a7740 is same with the state(6) to be set
00:23:35.071 [2024-11-19 18:22:36.104019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a6560 is same with the state(6) to be set
00:23:35.071 [2024-11-19 18:22:36.104047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a7a70 is same with the state(6) to be set
00:23:35.071 [2024-11-19 18:22:36.104075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a6bc0 is same with the state(6) to be set
00:23:35.071 [2024-11-19 18:22:36.104104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a6890 is same with the state(6) to be set
00:23:35.071 [2024-11-19 18:22:36.104133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a8ae0 is same with the state(6) to be set
00:23:35.071 [2024-11-19 18:22:36.104168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a6ef0 is same with the state(6) to be set
00:23:35.071 [2024-11-19 18:22:36.104197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a8900 is same with the state(6) to be set
00:23:35.071 [2024-11-19 18:22:36.104225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a8720 is same with the state(6) to be set
00:23:35.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:35.071 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:36.013 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2057874
00:23:36.013 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2057874
00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2057874 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:36.014 rmmod nvme_tcp 00:23:36.014 rmmod nvme_fabrics 00:23:36.014 rmmod nvme_keyring 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2057487 ']' 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2057487 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2057487 ']' 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2057487 00:23:36.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2057487) - No such process 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2057487 is not found' 00:23:36.014 Process with pid 2057487 is not found 
00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.014 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.559 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:38.559 00:23:38.559 real 0m10.292s 00:23:38.559 user 0m27.980s 00:23:38.559 sys 0m3.966s 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:38.560 18:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:38.560 ************************************ 00:23:38.560 END TEST nvmf_shutdown_tc4 00:23:38.560 ************************************ 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:38.560 00:23:38.560 real 0m43.726s 00:23:38.560 user 1m46.776s 00:23:38.560 sys 0m13.785s 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:38.560 ************************************ 00:23:38.560 END TEST nvmf_shutdown 00:23:38.560 ************************************ 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:38.560 ************************************ 00:23:38.560 START TEST nvmf_nsid 00:23:38.560 ************************************ 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:38.560 * Looking for test storage... 
00:23:38.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:38.560 
18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:38.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.560 --rc genhtml_branch_coverage=1 00:23:38.560 --rc genhtml_function_coverage=1 00:23:38.560 --rc genhtml_legend=1 00:23:38.560 --rc geninfo_all_blocks=1 00:23:38.560 --rc 
geninfo_unexecuted_blocks=1 00:23:38.560 00:23:38.560 ' 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:38.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.560 --rc genhtml_branch_coverage=1 00:23:38.560 --rc genhtml_function_coverage=1 00:23:38.560 --rc genhtml_legend=1 00:23:38.560 --rc geninfo_all_blocks=1 00:23:38.560 --rc geninfo_unexecuted_blocks=1 00:23:38.560 00:23:38.560 ' 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:38.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.560 --rc genhtml_branch_coverage=1 00:23:38.560 --rc genhtml_function_coverage=1 00:23:38.560 --rc genhtml_legend=1 00:23:38.560 --rc geninfo_all_blocks=1 00:23:38.560 --rc geninfo_unexecuted_blocks=1 00:23:38.560 00:23:38.560 ' 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:38.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.560 --rc genhtml_branch_coverage=1 00:23:38.560 --rc genhtml_function_coverage=1 00:23:38.560 --rc genhtml_legend=1 00:23:38.560 --rc geninfo_all_blocks=1 00:23:38.560 --rc geninfo_unexecuted_blocks=1 00:23:38.560 00:23:38.560 ' 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:38.560 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.561 18:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:38.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:38.561 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:46.700 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:46.700 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.700 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:46.701 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:46.701 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:46.701 18:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.701 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:46.701 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:23:46.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:23:46.701 00:23:46.701 --- 10.0.0.2 ping statistics --- 00:23:46.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.701 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:46.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:23:46.701 00:23:46.701 --- 10.0.0.1 ping statistics --- 00:23:46.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.701 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:46.701 18:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2063219 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2063219 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2063219 ']' 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.701 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:46.701 [2024-11-19 18:22:47.287442] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:23:46.701 [2024-11-19 18:22:47.287509] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.701 [2024-11-19 18:22:47.386683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.701 [2024-11-19 18:22:47.437179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.701 [2024-11-19 18:22:47.437227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.701 [2024-11-19 18:22:47.437236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.701 [2024-11-19 18:22:47.437243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.701 [2024-11-19 18:22:47.437249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:46.701 [2024-11-19 18:22:47.437977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.701 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.701 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:46.701 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:46.701 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:46.701 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:46.701 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.701 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:46.701 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2063277 00:23:46.701 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:46.701 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:46.701 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:46.701 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:46.701 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.701 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.702 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.702 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.702 
18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.702 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.702 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.702 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.702 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.702 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:46.702 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:46.702 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=63e26243-0b71-4f63-ba05-593a7d9ae82f 00:23:46.702 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:46.702 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=19e0e3e7-5ea4-41cf-b237-312228c00181 00:23:46.702 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:46.962 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=c97ab5ef-0abe-4d56-9811-c75ba6e7f563 00:23:46.962 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:46.962 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.962 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:46.962 null0 00:23:46.962 null1 00:23:46.962 null2 00:23:46.962 [2024-11-19 18:22:48.200789] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:23:46.962 [2024-11-19 18:22:48.200861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2063277 ] 00:23:46.962 [2024-11-19 18:22:48.203830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.962 [2024-11-19 18:22:48.228085] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.962 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.962 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2063277 /var/tmp/tgt2.sock 00:23:46.962 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2063277 ']' 00:23:46.962 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:46.962 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.962 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:46.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:23:46.962 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.962 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:46.962 [2024-11-19 18:22:48.292116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.962 [2024-11-19 18:22:48.346033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.222 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:47.222 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:47.222 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:47.482 [2024-11-19 18:22:48.902489] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.482 [2024-11-19 18:22:48.918671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:47.482 nvme0n1 nvme0n2 00:23:47.482 nvme1n1 00:23:47.741 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:47.741 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:47.741 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:49.122 18:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:49.122 18:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:49.122 18:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:23:49.122 18:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:49.122 18:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:49.122 18:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:49.122 18:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:49.122 18:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:49.122 18:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:49.122 18:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:49.122 18:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:49.123 18:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:49.123 18:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 63e26243-0b71-4f63-ba05-593a7d9ae82f 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:50.063 18:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=63e262430b714f63ba05593a7d9ae82f 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 63E262430B714F63BA05593A7D9AE82F 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 63E262430B714F63BA05593A7D9AE82F == \6\3\E\2\6\2\4\3\0\B\7\1\4\F\6\3\B\A\0\5\5\9\3\A\7\D\9\A\E\8\2\F ]] 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 19e0e3e7-5ea4-41cf-b237-312228c00181 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:50.063 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:50.323 
18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=19e0e3e75ea441cfb237312228c00181 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 19E0E3E75EA441CFB237312228C00181 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 19E0E3E75EA441CFB237312228C00181 == \1\9\E\0\E\3\E\7\5\E\A\4\4\1\C\F\B\2\3\7\3\1\2\2\2\8\C\0\0\1\8\1 ]] 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid c97ab5ef-0abe-4d56-9811-c75ba6e7f563 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c97ab5ef0abe4d569811c75ba6e7f563 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C97AB5EF0ABE4D569811C75BA6E7F563 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ C97AB5EF0ABE4D569811C75BA6E7F563 == \C\9\7\A\B\5\E\F\0\A\B\E\4\D\5\6\9\8\1\1\C\7\5\B\A\6\E\7\F\5\6\3 ]] 00:23:50.324 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:50.584 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:50.584 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:50.584 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2063277 00:23:50.584 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2063277 ']' 00:23:50.584 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2063277 00:23:50.584 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:50.584 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.584 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2063277 00:23:50.584 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:50.584 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:50.584 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2063277' 00:23:50.584 killing process with pid 2063277 00:23:50.584 18:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2063277 00:23:50.584 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2063277 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:50.843 rmmod nvme_tcp 00:23:50.843 rmmod nvme_fabrics 00:23:50.843 rmmod nvme_keyring 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2063219 ']' 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2063219 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2063219 ']' 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2063219 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.843 18:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2063219 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:50.843 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:50.844 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2063219' 00:23:50.844 killing process with pid 2063219 00:23:50.844 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2063219 00:23:50.844 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2063219 00:23:51.104 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:51.104 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:51.104 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:51.104 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:51.104 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:51.104 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:51.104 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:51.104 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:51.104 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:51.104 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.104 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.104 18:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.029 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:53.029 00:23:53.029 real 0m14.859s 00:23:53.029 user 0m11.385s 00:23:53.029 sys 0m6.780s 00:23:53.029 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.029 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:53.029 ************************************ 00:23:53.029 END TEST nvmf_nsid 00:23:53.029 ************************************ 00:23:53.029 18:22:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:53.029 00:23:53.029 real 13m3.229s 00:23:53.029 user 27m18.637s 00:23:53.029 sys 3m56.271s 00:23:53.029 18:22:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.029 18:22:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:53.029 ************************************ 00:23:53.029 END TEST nvmf_target_extra 00:23:53.029 ************************************ 00:23:53.289 18:22:54 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:53.289 18:22:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:53.289 18:22:54 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.289 18:22:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:53.289 ************************************ 00:23:53.289 START TEST nvmf_host 00:23:53.289 ************************************ 00:23:53.289 18:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:53.289 * Looking for test storage... 
00:23:53.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:53.289 18:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:53.289 18:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:53.289 18:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:53.289 18:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:53.289 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.289 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.289 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.289 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:53.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.550 --rc genhtml_branch_coverage=1 00:23:53.550 --rc genhtml_function_coverage=1 00:23:53.550 --rc genhtml_legend=1 00:23:53.550 --rc geninfo_all_blocks=1 00:23:53.550 --rc geninfo_unexecuted_blocks=1 00:23:53.550 00:23:53.550 ' 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:53.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.550 --rc genhtml_branch_coverage=1 00:23:53.550 --rc genhtml_function_coverage=1 00:23:53.550 --rc genhtml_legend=1 00:23:53.550 --rc 
geninfo_all_blocks=1 00:23:53.550 --rc geninfo_unexecuted_blocks=1 00:23:53.550 00:23:53.550 ' 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:53.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.550 --rc genhtml_branch_coverage=1 00:23:53.550 --rc genhtml_function_coverage=1 00:23:53.550 --rc genhtml_legend=1 00:23:53.550 --rc geninfo_all_blocks=1 00:23:53.550 --rc geninfo_unexecuted_blocks=1 00:23:53.550 00:23:53.550 ' 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:53.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.550 --rc genhtml_branch_coverage=1 00:23:53.550 --rc genhtml_function_coverage=1 00:23:53.550 --rc genhtml_legend=1 00:23:53.550 --rc geninfo_all_blocks=1 00:23:53.550 --rc geninfo_unexecuted_blocks=1 00:23:53.550 00:23:53.550 ' 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.550 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:53.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.551 ************************************ 00:23:53.551 START TEST nvmf_multicontroller 00:23:53.551 ************************************ 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:53.551 * Looking for test storage... 
00:23:53.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:23:53.551 18:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:53.812 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:53.812 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.812 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:53.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.813 --rc genhtml_branch_coverage=1 00:23:53.813 --rc genhtml_function_coverage=1 
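The trace above steps through SPDK's version-comparison helpers (`lt`, `cmp_versions`, `decimal` in scripts/common.sh): both versions are split on `.`/`-`/`:` into arrays and compared field by field as integers. A minimal standalone sketch of the same idea (the function name `lt` matches the trace; the body is a simplified reimplementation, not SPDK's exact code):

```shell
#!/usr/bin/env bash
# lt VER1 VER2 -> exit 0 if VER1 < VER2, comparing dotted
# versions field by field as integers (so 1.9 < 1.15),
# mirroring the "lt 1.15 2" call seen in the trace.
lt() {
    local IFS=.-:          # split fields on '.', '-' or ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad missing fields with 0
        (( a < b )) && return 0   # first differing field decides
        (( a > b )) && return 1
    done
    return 1                      # equal -> not less-than
}

lt 1.15 2 && echo "1.15 < 2"
lt 2.1 2.0 || echo "2.1 >= 2.0"
```

This is why the trace returns 0 for `lt 1.15 2`: the first fields 1 and 2 already decide the comparison, and lcov 1.15 therefore gets the legacy `--rc lcov_branch_coverage=...` option spelling.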
00:23:53.813 --rc genhtml_legend=1 00:23:53.813 --rc geninfo_all_blocks=1 00:23:53.813 --rc geninfo_unexecuted_blocks=1 00:23:53.813 00:23:53.813 ' 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:53.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.813 --rc genhtml_branch_coverage=1 00:23:53.813 --rc genhtml_function_coverage=1 00:23:53.813 --rc genhtml_legend=1 00:23:53.813 --rc geninfo_all_blocks=1 00:23:53.813 --rc geninfo_unexecuted_blocks=1 00:23:53.813 00:23:53.813 ' 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:53.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.813 --rc genhtml_branch_coverage=1 00:23:53.813 --rc genhtml_function_coverage=1 00:23:53.813 --rc genhtml_legend=1 00:23:53.813 --rc geninfo_all_blocks=1 00:23:53.813 --rc geninfo_unexecuted_blocks=1 00:23:53.813 00:23:53.813 ' 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:53.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.813 --rc genhtml_branch_coverage=1 00:23:53.813 --rc genhtml_function_coverage=1 00:23:53.813 --rc genhtml_legend=1 00:23:53.813 --rc geninfo_all_blocks=1 00:23:53.813 --rc geninfo_unexecuted_blocks=1 00:23:53.813 00:23:53.813 ' 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.813 18:22:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:53.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:53.813 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:53.814 18:22:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:01.949 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:01.949 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:01.949 18:23:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:01.949 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:01.949 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:01.949 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:01.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:24:01.950 00:24:01.950 --- 10.0.0.2 ping statistics --- 00:24:01.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.950 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:24:01.950 00:24:01.950 --- 10.0.0.1 ping statistics --- 00:24:01.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.950 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2068374 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2068374 00:24:01.950 18:23:02 
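The `nvmf_tcp_init` sequence above isolates the target-side NIC in its own network namespace (`cvl_0_0_ns_spdk`), assigns 10.0.0.2/24 to it and 10.0.0.1/24 to the initiator side, then verifies reachability with one ping in each direction. A sketch of the same wiring pattern; the log moves a physical NIC (`cvl_0_0`), whereas this sketch substitutes a veth pair (an assumption) so the topology can be reproduced on any Linux host with root privileges:

```shell
#!/usr/bin/env bash
# Rebuild the target/initiator split from the trace with a veth
# pair instead of a physical NIC (assumption for portability).
# Requires root. Namespace and addresses follow the log's layout.
set -e
ip netns add spdk_tgt_ns
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns spdk_tgt_ns      # target side lives in the netns

ip addr add 10.0.0.1/24 dev veth_init                           # initiator
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt  # target

ip link set veth_init up
ip netns exec spdk_tgt_ns ip link set veth_tgt up
ip netns exec spdk_tgt_ns ip link set lo up

ping -c 1 10.0.0.2                             # initiator -> target
ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1   # target -> initiator
```

Running the target under `ip netns exec` (the `NVMF_TARGET_NS_CMD` prefix prepended to `NVMF_APP` in the trace) is what keeps target and initiator traffic on a real TCP path even though both run on the same host.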
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2068374 ']' 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.950 18:23:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:01.950 [2024-11-19 18:23:02.642287] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:24:01.950 [2024-11-19 18:23:02.642353] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.950 [2024-11-19 18:23:02.746782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:01.950 [2024-11-19 18:23:02.800281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.950 [2024-11-19 18:23:02.800332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:01.950 [2024-11-19 18:23:02.800340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.950 [2024-11-19 18:23:02.800347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.950 [2024-11-19 18:23:02.800354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.950 [2024-11-19 18:23:02.802203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.950 [2024-11-19 18:23:02.802424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.950 [2024-11-19 18:23:02.802426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.211 [2024-11-19 18:23:03.509806] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.211 Malloc0 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:02.211 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.212 [2024-11-19 
18:23:03.583686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.212 [2024-11-19 18:23:03.595630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.212 Malloc1 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2068699 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2068699 /var/tmp/bdevperf.sock 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2068699 ']' 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:02.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.212 18:23:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.153 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.153 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:03.153 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:03.153 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.153 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.414 NVMe0n1 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.414 1 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:03.414 18:23:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.414 request: 00:24:03.414 { 00:24:03.414 "name": "NVMe0", 00:24:03.414 "trtype": "tcp", 00:24:03.414 "traddr": "10.0.0.2", 00:24:03.414 "adrfam": "ipv4", 00:24:03.414 "trsvcid": "4420", 00:24:03.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.414 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:03.414 "hostaddr": "10.0.0.1", 00:24:03.414 "prchk_reftag": false, 00:24:03.414 "prchk_guard": false, 00:24:03.414 "hdgst": false, 00:24:03.414 "ddgst": false, 00:24:03.414 "allow_unrecognized_csi": false, 00:24:03.414 "method": "bdev_nvme_attach_controller", 00:24:03.414 "req_id": 1 00:24:03.414 } 00:24:03.414 Got JSON-RPC error response 00:24:03.414 response: 00:24:03.414 { 00:24:03.414 "code": -114, 00:24:03.414 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:03.414 } 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:03.414 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:03.415 18:23:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.415 request: 00:24:03.415 { 00:24:03.415 "name": "NVMe0", 00:24:03.415 "trtype": "tcp", 00:24:03.415 "traddr": "10.0.0.2", 00:24:03.415 "adrfam": "ipv4", 00:24:03.415 "trsvcid": "4420", 00:24:03.415 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:03.415 "hostaddr": "10.0.0.1", 00:24:03.415 "prchk_reftag": false, 00:24:03.415 "prchk_guard": false, 00:24:03.415 "hdgst": false, 00:24:03.415 "ddgst": false, 00:24:03.415 "allow_unrecognized_csi": false, 00:24:03.415 "method": "bdev_nvme_attach_controller", 00:24:03.415 "req_id": 1 00:24:03.415 } 00:24:03.415 Got JSON-RPC error response 00:24:03.415 response: 00:24:03.415 { 00:24:03.415 "code": -114, 00:24:03.415 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:03.415 } 00:24:03.415 18:23:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.415 request: 00:24:03.415 { 00:24:03.415 "name": "NVMe0", 00:24:03.415 "trtype": "tcp", 00:24:03.415 "traddr": "10.0.0.2", 00:24:03.415 "adrfam": "ipv4", 00:24:03.415 "trsvcid": "4420", 00:24:03.415 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.415 "hostaddr": "10.0.0.1", 00:24:03.415 "prchk_reftag": false, 00:24:03.415 "prchk_guard": false, 00:24:03.415 "hdgst": false, 00:24:03.415 "ddgst": false, 00:24:03.415 "multipath": "disable", 00:24:03.415 "allow_unrecognized_csi": false, 00:24:03.415 "method": "bdev_nvme_attach_controller", 00:24:03.415 "req_id": 1 00:24:03.415 } 00:24:03.415 Got JSON-RPC error response 00:24:03.415 response: 00:24:03.415 { 00:24:03.415 "code": -114, 00:24:03.415 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:03.415 } 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.415 request: 00:24:03.415 { 00:24:03.415 "name": "NVMe0", 00:24:03.415 "trtype": "tcp", 00:24:03.415 "traddr": "10.0.0.2", 00:24:03.415 "adrfam": "ipv4", 00:24:03.415 "trsvcid": "4420", 00:24:03.415 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.415 "hostaddr": "10.0.0.1", 00:24:03.415 "prchk_reftag": false, 00:24:03.415 "prchk_guard": false, 00:24:03.415 "hdgst": false, 00:24:03.415 "ddgst": false, 00:24:03.415 "multipath": "failover", 00:24:03.415 "allow_unrecognized_csi": false, 00:24:03.415 "method": "bdev_nvme_attach_controller", 00:24:03.415 "req_id": 1 00:24:03.415 } 00:24:03.415 Got JSON-RPC error response 00:24:03.415 response: 00:24:03.415 { 00:24:03.415 "code": -114, 00:24:03.415 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:03.415 } 00:24:03.415 18:23:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.415 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.676 NVMe0n1 00:24:03.676 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.676 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:03.676 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.676 18:23:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.676 18:23:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.676 18:23:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:03.676 18:23:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.676 18:23:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.936 00:24:03.936 18:23:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.936 18:23:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:03.936 18:23:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:03.936 18:23:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.936 18:23:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.936 18:23:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.936 18:23:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:03.936 18:23:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:05.317 { 00:24:05.317 "results": [ 00:24:05.317 { 00:24:05.317 "job": "NVMe0n1", 00:24:05.317 "core_mask": "0x1", 00:24:05.317 "workload": "write", 00:24:05.317 "status": "finished", 00:24:05.317 "queue_depth": 128, 00:24:05.317 "io_size": 4096, 00:24:05.317 "runtime": 1.007784, 00:24:05.317 "iops": 26800.385796956492, 00:24:05.317 "mibps": 104.6890070193613, 00:24:05.317 "io_failed": 0, 00:24:05.317 "io_timeout": 0, 00:24:05.317 "avg_latency_us": 4767.19461117899, 00:24:05.317 "min_latency_us": 2075.306666666667, 00:24:05.317 "max_latency_us": 12888.746666666666 00:24:05.317 } 00:24:05.317 ], 00:24:05.317 "core_count": 1 00:24:05.317 } 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2068699 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2068699 ']' 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2068699 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2068699 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2068699' 00:24:05.317 killing process with pid 2068699 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2068699 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2068699 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:05.317 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:05.317 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:05.317 [2024-11-19 18:23:03.733855] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:24:05.317 [2024-11-19 18:23:03.733937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2068699 ] 00:24:05.317 [2024-11-19 18:23:03.827761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.317 [2024-11-19 18:23:03.881350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.317 [2024-11-19 18:23:05.233510] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 8e897a0c-7b24-4c81-bd59-4d33a5f8912e already exists 00:24:05.317 [2024-11-19 18:23:05.233557] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:8e897a0c-7b24-4c81-bd59-4d33a5f8912e alias for bdev NVMe1n1 00:24:05.317 [2024-11-19 18:23:05.233568] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:05.317 Running I/O for 1 seconds... 00:24:05.317 26771.00 IOPS, 104.57 MiB/s 00:24:05.317 Latency(us) 00:24:05.317 [2024-11-19T17:23:06.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.317 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:05.317 NVMe0n1 : 1.01 26800.39 104.69 0.00 0.00 4767.19 2075.31 12888.75 00:24:05.317 [2024-11-19T17:23:06.789Z] =================================================================================================================== 00:24:05.318 [2024-11-19T17:23:06.789Z] Total : 26800.39 104.69 0.00 0.00 4767.19 2075.31 12888.75 00:24:05.318 Received shutdown signal, test time was about 1.000000 seconds 00:24:05.318 00:24:05.318 Latency(us) 00:24:05.318 [2024-11-19T17:23:06.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.318 [2024-11-19T17:23:06.789Z] =================================================================================================================== 00:24:05.318 [2024-11-19T17:23:06.789Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:24:05.318 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:05.318 rmmod nvme_tcp 00:24:05.318 rmmod nvme_fabrics 00:24:05.318 rmmod nvme_keyring 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2068374 ']' 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2068374 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2068374 ']' 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2068374 
00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2068374 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2068374' 00:24:05.318 killing process with pid 2068374 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2068374 00:24:05.318 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2068374 00:24:05.577 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:05.577 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:05.577 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:05.577 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:05.577 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:05.577 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:05.577 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:05.577 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:05.577 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:24:05.577 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.577 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.577 18:23:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.509 18:23:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:07.509 00:24:07.509 real 0m14.127s 00:24:07.509 user 0m17.731s 00:24:07.509 sys 0m6.497s 00:24:07.509 18:23:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:07.769 18:23:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:07.769 ************************************ 00:24:07.769 END TEST nvmf_multicontroller 00:24:07.769 ************************************ 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.769 ************************************ 00:24:07.769 START TEST nvmf_aer 00:24:07.769 ************************************ 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:07.769 * Looking for test storage... 
00:24:07.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:07.769 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:08.030 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.030 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:08.030 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:08.030 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.030 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:08.030 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.030 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.030 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.030 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:08.030 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.030 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:08.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.030 --rc genhtml_branch_coverage=1 00:24:08.030 --rc genhtml_function_coverage=1 00:24:08.030 --rc genhtml_legend=1 00:24:08.030 --rc geninfo_all_blocks=1 00:24:08.030 --rc geninfo_unexecuted_blocks=1 00:24:08.030 00:24:08.030 ' 00:24:08.030 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:08.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.030 --rc 
genhtml_branch_coverage=1 00:24:08.030 --rc genhtml_function_coverage=1 00:24:08.030 --rc genhtml_legend=1 00:24:08.030 --rc geninfo_all_blocks=1 00:24:08.030 --rc geninfo_unexecuted_blocks=1 00:24:08.030 00:24:08.030 ' 00:24:08.030 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:08.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.030 --rc genhtml_branch_coverage=1 00:24:08.030 --rc genhtml_function_coverage=1 00:24:08.030 --rc genhtml_legend=1 00:24:08.030 --rc geninfo_all_blocks=1 00:24:08.030 --rc geninfo_unexecuted_blocks=1 00:24:08.030 00:24:08.030 ' 00:24:08.030 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:08.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.030 --rc genhtml_branch_coverage=1 00:24:08.030 --rc genhtml_function_coverage=1 00:24:08.030 --rc genhtml_legend=1 00:24:08.030 --rc geninfo_all_blocks=1 00:24:08.030 --rc geninfo_unexecuted_blocks=1 00:24:08.030 00:24:08.030 ' 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.031 18:23:09 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:08.031 18:23:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:16.171 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.171 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:16.171 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:16.171 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:16.171 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:16.171 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:16.171 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:16.171 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:16.171 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:16.171 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:24:16.171 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:16.171 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:16.172 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:16.172 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.172 18:23:16 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:16.172 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:16.172 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:16.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:16.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:24:16.172 00:24:16.172 --- 10.0.0.2 ping statistics --- 00:24:16.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.172 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:16.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:24:16.172 00:24:16.172 --- 10.0.0.1 ping statistics --- 00:24:16.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.172 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2073389 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2073389 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2073389 ']' 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.172 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.173 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.173 18:23:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:16.173 [2024-11-19 18:23:16.863346] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:24:16.173 [2024-11-19 18:23:16.863415] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.173 [2024-11-19 18:23:16.962746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:16.173 [2024-11-19 18:23:17.016879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:16.173 [2024-11-19 18:23:17.016930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.173 [2024-11-19 18:23:17.016939] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.173 [2024-11-19 18:23:17.016946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.173 [2024-11-19 18:23:17.016953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.173 [2024-11-19 18:23:17.019208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.173 [2024-11-19 18:23:17.019320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.173 [2024-11-19 18:23:17.019481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:16.173 [2024-11-19 18:23:17.019482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:16.435 [2024-11-19 18:23:17.723818] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:16.435 Malloc0 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:16.435 [2024-11-19 18:23:17.805258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:16.435 [ 00:24:16.435 { 00:24:16.435 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:16.435 "subtype": "Discovery", 00:24:16.435 "listen_addresses": [], 00:24:16.435 "allow_any_host": true, 00:24:16.435 "hosts": [] 00:24:16.435 }, 00:24:16.435 { 00:24:16.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.435 "subtype": "NVMe", 00:24:16.435 "listen_addresses": [ 00:24:16.435 { 00:24:16.435 "trtype": "TCP", 00:24:16.435 "adrfam": "IPv4", 00:24:16.435 "traddr": "10.0.0.2", 00:24:16.435 "trsvcid": "4420" 00:24:16.435 } 00:24:16.435 ], 00:24:16.435 "allow_any_host": true, 00:24:16.435 "hosts": [], 00:24:16.435 "serial_number": "SPDK00000000000001", 00:24:16.435 "model_number": "SPDK bdev Controller", 00:24:16.435 "max_namespaces": 2, 00:24:16.435 "min_cntlid": 1, 00:24:16.435 "max_cntlid": 65519, 00:24:16.435 "namespaces": [ 00:24:16.435 { 00:24:16.435 "nsid": 1, 00:24:16.435 "bdev_name": "Malloc0", 00:24:16.435 "name": "Malloc0", 00:24:16.435 "nguid": "B059E584BC904896BB6D2D4F4515B78F", 00:24:16.435 "uuid": "b059e584-bc90-4896-bb6d-2d4f4515b78f" 00:24:16.435 } 00:24:16.435 ] 00:24:16.435 } 00:24:16.435 ] 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2073740 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:16.435 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:16.696 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:16.696 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:16.696 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:16.696 18:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:16.696 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:16.696 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:16.696 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:16.696 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:16.696 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.696 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:16.696 Malloc1 00:24:16.696 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.696 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:16.696 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.696 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:16.696 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.696 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:16.696 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.696 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:16.697 Asynchronous Event Request test 00:24:16.697 Attaching to 10.0.0.2 00:24:16.697 Attached to 10.0.0.2 00:24:16.697 Registering asynchronous event callbacks... 00:24:16.697 Starting namespace attribute notice tests for all controllers... 00:24:16.697 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:16.697 aer_cb - Changed Namespace 00:24:16.697 Cleaning up... 
00:24:16.697 [ 00:24:16.697 { 00:24:16.697 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:16.697 "subtype": "Discovery", 00:24:16.697 "listen_addresses": [], 00:24:16.697 "allow_any_host": true, 00:24:16.697 "hosts": [] 00:24:16.697 }, 00:24:16.697 { 00:24:16.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.697 "subtype": "NVMe", 00:24:16.697 "listen_addresses": [ 00:24:16.697 { 00:24:16.697 "trtype": "TCP", 00:24:16.697 "adrfam": "IPv4", 00:24:16.697 "traddr": "10.0.0.2", 00:24:16.697 "trsvcid": "4420" 00:24:16.697 } 00:24:16.697 ], 00:24:16.697 "allow_any_host": true, 00:24:16.697 "hosts": [], 00:24:16.697 "serial_number": "SPDK00000000000001", 00:24:16.697 "model_number": "SPDK bdev Controller", 00:24:16.697 "max_namespaces": 2, 00:24:16.697 "min_cntlid": 1, 00:24:16.697 "max_cntlid": 65519, 00:24:16.697 "namespaces": [ 00:24:16.697 { 00:24:16.697 "nsid": 1, 00:24:16.697 "bdev_name": "Malloc0", 00:24:16.697 "name": "Malloc0", 00:24:16.697 "nguid": "B059E584BC904896BB6D2D4F4515B78F", 00:24:16.697 "uuid": "b059e584-bc90-4896-bb6d-2d4f4515b78f" 00:24:16.697 }, 00:24:16.697 { 00:24:16.697 "nsid": 2, 00:24:16.697 "bdev_name": "Malloc1", 00:24:16.697 "name": "Malloc1", 00:24:16.697 "nguid": "016FDC486FFE4A94B4F876602B1B7B03", 00:24:16.697 "uuid": "016fdc48-6ffe-4a94-b4f8-76602b1b7b03" 00:24:16.697 } 00:24:16.697 ] 00:24:16.697 } 00:24:16.697 ] 00:24:16.697 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.697 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2073740 00:24:16.697 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:16.697 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.697 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:16.697 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.697 18:23:18 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:16.697 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.697 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:16.958 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.958 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:16.958 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.958 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:16.958 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.958 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:16.958 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:16.958 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:16.958 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:16.959 rmmod nvme_tcp 00:24:16.959 rmmod nvme_fabrics 00:24:16.959 rmmod nvme_keyring 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
2073389 ']' 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2073389 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2073389 ']' 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2073389 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2073389 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2073389' 00:24:16.959 killing process with pid 2073389 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2073389 00:24:16.959 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2073389 00:24:17.220 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:17.220 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:17.220 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:17.220 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:17.220 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:17.220 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:17.220 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:17.220 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:17.220 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:17.220 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.220 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.220 18:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.138 18:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:19.138 00:24:19.138 real 0m11.515s 00:24:19.138 user 0m8.085s 00:24:19.138 sys 0m6.174s 00:24:19.138 18:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:19.138 18:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:19.138 ************************************ 00:24:19.138 END TEST nvmf_aer 00:24:19.138 ************************************ 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.400 ************************************ 00:24:19.400 START TEST nvmf_async_init 00:24:19.400 ************************************ 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:19.400 * Looking for test storage... 
00:24:19.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:19.400 18:23:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:19.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.400 --rc genhtml_branch_coverage=1 00:24:19.400 --rc genhtml_function_coverage=1 00:24:19.400 --rc genhtml_legend=1 00:24:19.400 --rc geninfo_all_blocks=1 00:24:19.400 --rc geninfo_unexecuted_blocks=1 00:24:19.400 
00:24:19.400 ' 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:19.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.400 --rc genhtml_branch_coverage=1 00:24:19.400 --rc genhtml_function_coverage=1 00:24:19.400 --rc genhtml_legend=1 00:24:19.400 --rc geninfo_all_blocks=1 00:24:19.400 --rc geninfo_unexecuted_blocks=1 00:24:19.400 00:24:19.400 ' 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:19.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.400 --rc genhtml_branch_coverage=1 00:24:19.400 --rc genhtml_function_coverage=1 00:24:19.400 --rc genhtml_legend=1 00:24:19.400 --rc geninfo_all_blocks=1 00:24:19.400 --rc geninfo_unexecuted_blocks=1 00:24:19.400 00:24:19.400 ' 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:19.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.400 --rc genhtml_branch_coverage=1 00:24:19.400 --rc genhtml_function_coverage=1 00:24:19.400 --rc genhtml_legend=1 00:24:19.400 --rc geninfo_all_blocks=1 00:24:19.400 --rc geninfo_unexecuted_blocks=1 00:24:19.400 00:24:19.400 ' 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.400 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:19.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a7484db2bb274f849d0a99e9032a2e94 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:19.662 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.663 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:19.663 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:19.663 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:19.663 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.663 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.663 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.663 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:19.663 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:19.663 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:19.663 18:23:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:27.990 18:23:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:27.990 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:27.990 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:27.990 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:27.990 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:27.990 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:27.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:27.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:24:27.991 00:24:27.991 --- 10.0.0.2 ping statistics --- 00:24:27.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.991 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:27.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:24:27.991 00:24:27.991 --- 10.0.0.1 ping statistics --- 00:24:27.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.991 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2077974 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2077974 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2077974 ']' 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.991 18:23:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.991 [2024-11-19 18:23:28.485275] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:24:27.991 [2024-11-19 18:23:28.485338] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.991 [2024-11-19 18:23:28.590073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.991 [2024-11-19 18:23:28.641987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.991 [2024-11-19 18:23:28.642045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.991 [2024-11-19 18:23:28.642054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.991 [2024-11-19 18:23:28.642061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.991 [2024-11-19 18:23:28.642068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:27.991 [2024-11-19 18:23:28.642796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.991 [2024-11-19 18:23:29.360724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.991 null0 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a7484db2bb274f849d0a99e9032a2e94 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.991 [2024-11-19 18:23:29.421111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.991 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.252 nvme0n1 00:24:28.252 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.252 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:28.252 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.252 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.252 [ 00:24:28.252 { 00:24:28.252 "name": "nvme0n1", 00:24:28.252 "aliases": [ 00:24:28.252 "a7484db2-bb27-4f84-9d0a-99e9032a2e94" 00:24:28.252 ], 00:24:28.252 "product_name": "NVMe disk", 00:24:28.252 "block_size": 512, 00:24:28.252 "num_blocks": 2097152, 00:24:28.252 "uuid": "a7484db2-bb27-4f84-9d0a-99e9032a2e94", 00:24:28.252 "numa_id": 0, 00:24:28.252 "assigned_rate_limits": { 00:24:28.252 "rw_ios_per_sec": 0, 00:24:28.252 "rw_mbytes_per_sec": 0, 00:24:28.252 "r_mbytes_per_sec": 0, 00:24:28.252 "w_mbytes_per_sec": 0 00:24:28.252 }, 00:24:28.252 "claimed": false, 00:24:28.252 "zoned": false, 00:24:28.252 "supported_io_types": { 00:24:28.252 "read": true, 00:24:28.252 "write": true, 00:24:28.252 "unmap": false, 00:24:28.252 "flush": true, 00:24:28.252 "reset": true, 00:24:28.252 "nvme_admin": true, 00:24:28.252 "nvme_io": true, 00:24:28.252 "nvme_io_md": false, 00:24:28.252 "write_zeroes": true, 00:24:28.252 "zcopy": false, 00:24:28.252 "get_zone_info": false, 00:24:28.252 "zone_management": false, 00:24:28.252 "zone_append": false, 00:24:28.252 "compare": true, 00:24:28.252 "compare_and_write": true, 00:24:28.252 "abort": true, 00:24:28.252 "seek_hole": false, 00:24:28.252 "seek_data": false, 00:24:28.252 "copy": true, 00:24:28.252 
"nvme_iov_md": false 00:24:28.252 }, 00:24:28.252 "memory_domains": [ 00:24:28.252 { 00:24:28.252 "dma_device_id": "system", 00:24:28.252 "dma_device_type": 1 00:24:28.252 } 00:24:28.252 ], 00:24:28.252 "driver_specific": { 00:24:28.252 "nvme": [ 00:24:28.252 { 00:24:28.252 "trid": { 00:24:28.252 "trtype": "TCP", 00:24:28.252 "adrfam": "IPv4", 00:24:28.252 "traddr": "10.0.0.2", 00:24:28.252 "trsvcid": "4420", 00:24:28.252 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:28.252 }, 00:24:28.252 "ctrlr_data": { 00:24:28.252 "cntlid": 1, 00:24:28.252 "vendor_id": "0x8086", 00:24:28.252 "model_number": "SPDK bdev Controller", 00:24:28.252 "serial_number": "00000000000000000000", 00:24:28.252 "firmware_revision": "25.01", 00:24:28.252 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:28.252 "oacs": { 00:24:28.252 "security": 0, 00:24:28.252 "format": 0, 00:24:28.252 "firmware": 0, 00:24:28.252 "ns_manage": 0 00:24:28.252 }, 00:24:28.252 "multi_ctrlr": true, 00:24:28.252 "ana_reporting": false 00:24:28.252 }, 00:24:28.252 "vs": { 00:24:28.252 "nvme_version": "1.3" 00:24:28.252 }, 00:24:28.252 "ns_data": { 00:24:28.252 "id": 1, 00:24:28.252 "can_share": true 00:24:28.252 } 00:24:28.252 } 00:24:28.252 ], 00:24:28.252 "mp_policy": "active_passive" 00:24:28.252 } 00:24:28.252 } 00:24:28.252 ] 00:24:28.252 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.252 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:28.252 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.252 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.252 [2024-11-19 18:23:29.698886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:28.252 [2024-11-19 18:23:29.698982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x25bdce0 (9): Bad file descriptor 00:24:28.513 [2024-11-19 18:23:29.831270] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.513 [ 00:24:28.513 { 00:24:28.513 "name": "nvme0n1", 00:24:28.513 "aliases": [ 00:24:28.513 "a7484db2-bb27-4f84-9d0a-99e9032a2e94" 00:24:28.513 ], 00:24:28.513 "product_name": "NVMe disk", 00:24:28.513 "block_size": 512, 00:24:28.513 "num_blocks": 2097152, 00:24:28.513 "uuid": "a7484db2-bb27-4f84-9d0a-99e9032a2e94", 00:24:28.513 "numa_id": 0, 00:24:28.513 "assigned_rate_limits": { 00:24:28.513 "rw_ios_per_sec": 0, 00:24:28.513 "rw_mbytes_per_sec": 0, 00:24:28.513 "r_mbytes_per_sec": 0, 00:24:28.513 "w_mbytes_per_sec": 0 00:24:28.513 }, 00:24:28.513 "claimed": false, 00:24:28.513 "zoned": false, 00:24:28.513 "supported_io_types": { 00:24:28.513 "read": true, 00:24:28.513 "write": true, 00:24:28.513 "unmap": false, 00:24:28.513 "flush": true, 00:24:28.513 "reset": true, 00:24:28.513 "nvme_admin": true, 00:24:28.513 "nvme_io": true, 00:24:28.513 "nvme_io_md": false, 00:24:28.513 "write_zeroes": true, 00:24:28.513 "zcopy": false, 00:24:28.513 "get_zone_info": false, 00:24:28.513 "zone_management": false, 00:24:28.513 "zone_append": false, 00:24:28.513 "compare": true, 00:24:28.513 "compare_and_write": true, 00:24:28.513 "abort": true, 00:24:28.513 "seek_hole": false, 00:24:28.513 "seek_data": false, 00:24:28.513 "copy": true, 00:24:28.513 "nvme_iov_md": false 00:24:28.513 }, 00:24:28.513 "memory_domains": [ 
00:24:28.513 { 00:24:28.513 "dma_device_id": "system", 00:24:28.513 "dma_device_type": 1 00:24:28.513 } 00:24:28.513 ], 00:24:28.513 "driver_specific": { 00:24:28.513 "nvme": [ 00:24:28.513 { 00:24:28.513 "trid": { 00:24:28.513 "trtype": "TCP", 00:24:28.513 "adrfam": "IPv4", 00:24:28.513 "traddr": "10.0.0.2", 00:24:28.513 "trsvcid": "4420", 00:24:28.513 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:28.513 }, 00:24:28.513 "ctrlr_data": { 00:24:28.513 "cntlid": 2, 00:24:28.513 "vendor_id": "0x8086", 00:24:28.513 "model_number": "SPDK bdev Controller", 00:24:28.513 "serial_number": "00000000000000000000", 00:24:28.513 "firmware_revision": "25.01", 00:24:28.513 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:28.513 "oacs": { 00:24:28.513 "security": 0, 00:24:28.513 "format": 0, 00:24:28.513 "firmware": 0, 00:24:28.513 "ns_manage": 0 00:24:28.513 }, 00:24:28.513 "multi_ctrlr": true, 00:24:28.513 "ana_reporting": false 00:24:28.513 }, 00:24:28.513 "vs": { 00:24:28.513 "nvme_version": "1.3" 00:24:28.513 }, 00:24:28.513 "ns_data": { 00:24:28.513 "id": 1, 00:24:28.513 "can_share": true 00:24:28.513 } 00:24:28.513 } 00:24:28.513 ], 00:24:28.513 "mp_policy": "active_passive" 00:24:28.513 } 00:24:28.513 } 00:24:28.513 ] 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.7WxCUKLpE8 
00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.7WxCUKLpE8 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.7WxCUKLpE8 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.513 [2024-11-19 18:23:29.919651] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:28.513 [2024-11-19 18:23:29.919815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.513 18:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.513 [2024-11-19 18:23:29.943733] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:28.774 nvme0n1 00:24:28.774 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.774 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:28.774 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.774 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.774 [ 00:24:28.774 { 00:24:28.774 "name": "nvme0n1", 00:24:28.774 "aliases": [ 00:24:28.774 "a7484db2-bb27-4f84-9d0a-99e9032a2e94" 00:24:28.774 ], 00:24:28.774 "product_name": "NVMe disk", 00:24:28.774 "block_size": 512, 00:24:28.774 "num_blocks": 2097152, 00:24:28.774 "uuid": "a7484db2-bb27-4f84-9d0a-99e9032a2e94", 00:24:28.774 "numa_id": 0, 00:24:28.774 "assigned_rate_limits": { 00:24:28.774 "rw_ios_per_sec": 0, 00:24:28.774 
"rw_mbytes_per_sec": 0, 00:24:28.774 "r_mbytes_per_sec": 0, 00:24:28.774 "w_mbytes_per_sec": 0 00:24:28.774 }, 00:24:28.774 "claimed": false, 00:24:28.774 "zoned": false, 00:24:28.774 "supported_io_types": { 00:24:28.774 "read": true, 00:24:28.774 "write": true, 00:24:28.774 "unmap": false, 00:24:28.774 "flush": true, 00:24:28.774 "reset": true, 00:24:28.774 "nvme_admin": true, 00:24:28.774 "nvme_io": true, 00:24:28.774 "nvme_io_md": false, 00:24:28.774 "write_zeroes": true, 00:24:28.774 "zcopy": false, 00:24:28.774 "get_zone_info": false, 00:24:28.774 "zone_management": false, 00:24:28.774 "zone_append": false, 00:24:28.774 "compare": true, 00:24:28.774 "compare_and_write": true, 00:24:28.774 "abort": true, 00:24:28.774 "seek_hole": false, 00:24:28.774 "seek_data": false, 00:24:28.774 "copy": true, 00:24:28.774 "nvme_iov_md": false 00:24:28.774 }, 00:24:28.774 "memory_domains": [ 00:24:28.774 { 00:24:28.774 "dma_device_id": "system", 00:24:28.774 "dma_device_type": 1 00:24:28.774 } 00:24:28.774 ], 00:24:28.774 "driver_specific": { 00:24:28.774 "nvme": [ 00:24:28.774 { 00:24:28.774 "trid": { 00:24:28.774 "trtype": "TCP", 00:24:28.774 "adrfam": "IPv4", 00:24:28.774 "traddr": "10.0.0.2", 00:24:28.774 "trsvcid": "4421", 00:24:28.774 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:28.774 }, 00:24:28.774 "ctrlr_data": { 00:24:28.774 "cntlid": 3, 00:24:28.774 "vendor_id": "0x8086", 00:24:28.774 "model_number": "SPDK bdev Controller", 00:24:28.774 "serial_number": "00000000000000000000", 00:24:28.774 "firmware_revision": "25.01", 00:24:28.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:28.774 "oacs": { 00:24:28.774 "security": 0, 00:24:28.774 "format": 0, 00:24:28.774 "firmware": 0, 00:24:28.774 "ns_manage": 0 00:24:28.774 }, 00:24:28.774 "multi_ctrlr": true, 00:24:28.774 "ana_reporting": false 00:24:28.774 }, 00:24:28.774 "vs": { 00:24:28.774 "nvme_version": "1.3" 00:24:28.774 }, 00:24:28.774 "ns_data": { 00:24:28.774 "id": 1, 00:24:28.774 "can_share": true 00:24:28.774 } 
00:24:28.774 } 00:24:28.774 ], 00:24:28.775 "mp_policy": "active_passive" 00:24:28.775 } 00:24:28.775 } 00:24:28.775 ] 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.7WxCUKLpE8 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.775 rmmod nvme_tcp 00:24:28.775 rmmod nvme_fabrics 00:24:28.775 rmmod nvme_keyring 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:28.775 18:23:30 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2077974 ']' 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2077974 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2077974 ']' 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2077974 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2077974 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2077974' 00:24:28.775 killing process with pid 2077974 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2077974 00:24:28.775 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2077974 00:24:29.036 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:29.036 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:29.036 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:29.036 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:29.036 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:29.036 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:29.036 
18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:29.036 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:29.036 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:29.036 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.036 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.036 18:23:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.578 00:24:31.578 real 0m11.793s 00:24:31.578 user 0m4.223s 00:24:31.578 sys 0m6.154s 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:31.578 ************************************ 00:24:31.578 END TEST nvmf_async_init 00:24:31.578 ************************************ 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.578 ************************************ 00:24:31.578 START TEST dma 00:24:31.578 ************************************ 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:24:31.578 * Looking for test storage... 00:24:31.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:31.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.578 --rc genhtml_branch_coverage=1 00:24:31.578 --rc genhtml_function_coverage=1 00:24:31.578 --rc genhtml_legend=1 00:24:31.578 --rc geninfo_all_blocks=1 00:24:31.578 --rc geninfo_unexecuted_blocks=1 00:24:31.578 00:24:31.578 ' 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:31.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.578 --rc genhtml_branch_coverage=1 00:24:31.578 --rc genhtml_function_coverage=1 
00:24:31.578 --rc genhtml_legend=1 00:24:31.578 --rc geninfo_all_blocks=1 00:24:31.578 --rc geninfo_unexecuted_blocks=1 00:24:31.578 00:24:31.578 ' 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:31.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.578 --rc genhtml_branch_coverage=1 00:24:31.578 --rc genhtml_function_coverage=1 00:24:31.578 --rc genhtml_legend=1 00:24:31.578 --rc geninfo_all_blocks=1 00:24:31.578 --rc geninfo_unexecuted_blocks=1 00:24:31.578 00:24:31.578 ' 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:31.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.578 --rc genhtml_branch_coverage=1 00:24:31.578 --rc genhtml_function_coverage=1 00:24:31.578 --rc genhtml_legend=1 00:24:31.578 --rc geninfo_all_blocks=1 00:24:31.578 --rc geninfo_unexecuted_blocks=1 00:24:31.578 00:24:31.578 ' 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.578 18:23:32 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:31.579 
18:23:32 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:31.579 00:24:31.579 real 0m0.238s 00:24:31.579 user 0m0.144s 00:24:31.579 sys 0m0.110s 00:24:31.579 18:23:32 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:31.579 ************************************ 00:24:31.579 END TEST dma 00:24:31.579 ************************************ 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.579 ************************************ 00:24:31.579 START TEST nvmf_identify 00:24:31.579 ************************************ 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:31.579 * Looking for test storage... 
00:24:31.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:31.579 18:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:31.579 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:31.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.839 --rc genhtml_branch_coverage=1 00:24:31.839 --rc genhtml_function_coverage=1 00:24:31.839 --rc genhtml_legend=1 00:24:31.839 --rc geninfo_all_blocks=1 00:24:31.839 --rc geninfo_unexecuted_blocks=1 00:24:31.839 00:24:31.839 ' 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:24:31.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.839 --rc genhtml_branch_coverage=1 00:24:31.839 --rc genhtml_function_coverage=1 00:24:31.839 --rc genhtml_legend=1 00:24:31.839 --rc geninfo_all_blocks=1 00:24:31.839 --rc geninfo_unexecuted_blocks=1 00:24:31.839 00:24:31.839 ' 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:31.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.839 --rc genhtml_branch_coverage=1 00:24:31.839 --rc genhtml_function_coverage=1 00:24:31.839 --rc genhtml_legend=1 00:24:31.839 --rc geninfo_all_blocks=1 00:24:31.839 --rc geninfo_unexecuted_blocks=1 00:24:31.839 00:24:31.839 ' 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:31.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.839 --rc genhtml_branch_coverage=1 00:24:31.839 --rc genhtml_function_coverage=1 00:24:31.839 --rc genhtml_legend=1 00:24:31.839 --rc geninfo_all_blocks=1 00:24:31.839 --rc geninfo_unexecuted_blocks=1 00:24:31.839 00:24:31.839 ' 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.839 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.840 18:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:39.978 18:23:40 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:39.978 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.978 
18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:39.978 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:39.978 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:39.978 18:23:40 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:39.978 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:39.978 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:39.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:24:39.979 00:24:39.979 --- 10.0.0.2 ping statistics --- 00:24:39.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.979 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:24:39.979 00:24:39.979 --- 10.0.0.1 ping statistics --- 00:24:39.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.979 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2082503 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2082503 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2082503 ']' 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.979 18:23:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:39.979 [2024-11-19 18:23:40.670187] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:24:39.979 [2024-11-19 18:23:40.670256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.979 [2024-11-19 18:23:40.778758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:39.979 [2024-11-19 18:23:40.833231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.979 [2024-11-19 18:23:40.833284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.979 [2024-11-19 18:23:40.833294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.979 [2024-11-19 18:23:40.833301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.979 [2024-11-19 18:23:40.833311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:39.979 [2024-11-19 18:23:40.835311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.979 [2024-11-19 18:23:40.835484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.979 [2024-11-19 18:23:40.835683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:39.979 [2024-11-19 18:23:40.835684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:40.240 [2024-11-19 18:23:41.504154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:40.240 Malloc0 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.240 18:23:41 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:40.240 [2024-11-19 18:23:41.626250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:40.240 18:23:41 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.240 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:40.240 [ 00:24:40.240 { 00:24:40.240 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:40.240 "subtype": "Discovery", 00:24:40.241 "listen_addresses": [ 00:24:40.241 { 00:24:40.241 "trtype": "TCP", 00:24:40.241 "adrfam": "IPv4", 00:24:40.241 "traddr": "10.0.0.2", 00:24:40.241 "trsvcid": "4420" 00:24:40.241 } 00:24:40.241 ], 00:24:40.241 "allow_any_host": true, 00:24:40.241 "hosts": [] 00:24:40.241 }, 00:24:40.241 { 00:24:40.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.241 "subtype": "NVMe", 00:24:40.241 "listen_addresses": [ 00:24:40.241 { 00:24:40.241 "trtype": "TCP", 00:24:40.241 "adrfam": "IPv4", 00:24:40.241 "traddr": "10.0.0.2", 00:24:40.241 "trsvcid": "4420" 00:24:40.241 } 00:24:40.241 ], 00:24:40.241 "allow_any_host": true, 00:24:40.241 "hosts": [], 00:24:40.241 "serial_number": "SPDK00000000000001", 00:24:40.241 "model_number": "SPDK bdev Controller", 00:24:40.241 "max_namespaces": 32, 00:24:40.241 "min_cntlid": 1, 00:24:40.241 "max_cntlid": 65519, 00:24:40.241 "namespaces": [ 00:24:40.241 { 00:24:40.241 "nsid": 1, 00:24:40.241 "bdev_name": "Malloc0", 00:24:40.241 "name": "Malloc0", 00:24:40.241 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:40.241 "eui64": "ABCDEF0123456789", 00:24:40.241 "uuid": "ef6bf936-173e-43f1-8bdf-fa372e40da02" 00:24:40.241 } 00:24:40.241 ] 00:24:40.241 } 00:24:40.241 ] 00:24:40.241 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.241 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:40.241 [2024-11-19 18:23:41.690580] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:24:40.241 [2024-11-19 18:23:41.690629] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082842 ] 00:24:40.505 [2024-11-19 18:23:41.744641] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:40.505 [2024-11-19 18:23:41.744716] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:40.505 [2024-11-19 18:23:41.744722] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:40.505 [2024-11-19 18:23:41.744737] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:40.505 [2024-11-19 18:23:41.744751] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:40.505 [2024-11-19 18:23:41.748571] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:40.505 [2024-11-19 18:23:41.748627] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x724690 0 00:24:40.505 [2024-11-19 18:23:41.756179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:40.505 [2024-11-19 18:23:41.756197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:40.505 [2024-11-19 18:23:41.756203] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:40.505 [2024-11-19 18:23:41.756206] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:40.505 [2024-11-19 18:23:41.756252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.505 [2024-11-19 18:23:41.756259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.505 [2024-11-19 18:23:41.756264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:40.505 [2024-11-19 18:23:41.756282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:40.505 [2024-11-19 18:23:41.756305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:40.505 [2024-11-19 18:23:41.764172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.505 [2024-11-19 18:23:41.764193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.505 [2024-11-19 18:23:41.764197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.505 [2024-11-19 18:23:41.764202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:40.505 [2024-11-19 18:23:41.764217] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:40.505 [2024-11-19 18:23:41.764226] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:40.505 [2024-11-19 18:23:41.764232] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:40.505 [2024-11-19 18:23:41.764254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.505 [2024-11-19 18:23:41.764259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.505 [2024-11-19 18:23:41.764262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 
00:24:40.505 [2024-11-19 18:23:41.764271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.505 [2024-11-19 18:23:41.764288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:40.505 [2024-11-19 18:23:41.764502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.505 [2024-11-19 18:23:41.764509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.505 [2024-11-19 18:23:41.764512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.505 [2024-11-19 18:23:41.764516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:40.505 [2024-11-19 18:23:41.764523] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:40.505 [2024-11-19 18:23:41.764531] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:40.505 [2024-11-19 18:23:41.764538] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.505 [2024-11-19 18:23:41.764542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.505 [2024-11-19 18:23:41.764545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:40.505 [2024-11-19 18:23:41.764552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.505 [2024-11-19 18:23:41.764563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:40.505 [2024-11-19 18:23:41.764764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.505 [2024-11-19 18:23:41.764771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:24:40.505 [2024-11-19 18:23:41.764774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.764778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:40.506 [2024-11-19 18:23:41.764784] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:40.506 [2024-11-19 18:23:41.764793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:40.506 [2024-11-19 18:23:41.764799] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.764803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.764807] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:40.506 [2024-11-19 18:23:41.764813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.506 [2024-11-19 18:23:41.764824] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:40.506 [2024-11-19 18:23:41.765016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.506 [2024-11-19 18:23:41.765023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.506 [2024-11-19 18:23:41.765026] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.765030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:40.506 [2024-11-19 18:23:41.765036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:40.506 [2024-11-19 18:23:41.765046] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.765050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.765057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:40.506 [2024-11-19 18:23:41.765064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.506 [2024-11-19 18:23:41.765074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:40.506 [2024-11-19 18:23:41.765249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.506 [2024-11-19 18:23:41.765256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.506 [2024-11-19 18:23:41.765259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.765263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:40.506 [2024-11-19 18:23:41.765268] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:40.506 [2024-11-19 18:23:41.765273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:40.506 [2024-11-19 18:23:41.765281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:40.506 [2024-11-19 18:23:41.765394] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:40.506 [2024-11-19 18:23:41.765399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:24:40.506 [2024-11-19 18:23:41.765409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.765413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.765416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:40.506 [2024-11-19 18:23:41.765423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.506 [2024-11-19 18:23:41.765434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:40.506 [2024-11-19 18:23:41.765648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.506 [2024-11-19 18:23:41.765655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.506 [2024-11-19 18:23:41.765658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.765662] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:40.506 [2024-11-19 18:23:41.765667] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:40.506 [2024-11-19 18:23:41.765678] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.765682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.765685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:40.506 [2024-11-19 18:23:41.765692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.506 [2024-11-19 18:23:41.765703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:40.506 [2024-11-19 
18:23:41.765918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.506 [2024-11-19 18:23:41.765924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.506 [2024-11-19 18:23:41.765927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.765931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:40.506 [2024-11-19 18:23:41.765936] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:40.506 [2024-11-19 18:23:41.765944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:40.506 [2024-11-19 18:23:41.765952] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:40.506 [2024-11-19 18:23:41.765967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:40.506 [2024-11-19 18:23:41.765977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.765981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:40.506 [2024-11-19 18:23:41.765988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.506 [2024-11-19 18:23:41.765999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:40.506 [2024-11-19 18:23:41.766238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.506 [2024-11-19 18:23:41.766246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:24:40.506 [2024-11-19 18:23:41.766250] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.766254] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x724690): datao=0, datal=4096, cccid=0 00:24:40.506 [2024-11-19 18:23:41.766259] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x786100) on tqpair(0x724690): expected_datao=0, payload_size=4096 00:24:40.506 [2024-11-19 18:23:41.766264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.766272] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.766277] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.766410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.506 [2024-11-19 18:23:41.766416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.506 [2024-11-19 18:23:41.766420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.766424] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:40.506 [2024-11-19 18:23:41.766433] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:40.506 [2024-11-19 18:23:41.766438] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:40.506 [2024-11-19 18:23:41.766443] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:40.506 [2024-11-19 18:23:41.766452] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:40.506 [2024-11-19 18:23:41.766458] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:24:40.506 [2024-11-19 18:23:41.766463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:40.506 [2024-11-19 18:23:41.766474] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:40.506 [2024-11-19 18:23:41.766482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.766486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.766489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:40.506 [2024-11-19 18:23:41.766497] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:40.506 [2024-11-19 18:23:41.766508] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:40.506 [2024-11-19 18:23:41.766731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.506 [2024-11-19 18:23:41.766740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.506 [2024-11-19 18:23:41.766744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.766747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:40.506 [2024-11-19 18:23:41.766756] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.766760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.766764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x724690) 00:24:40.506 [2024-11-19 18:23:41.766770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.506 [2024-11-19 18:23:41.766777] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.766780] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.766784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x724690) 00:24:40.506 [2024-11-19 18:23:41.766790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.506 [2024-11-19 18:23:41.766796] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.766799] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.766803] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x724690) 00:24:40.506 [2024-11-19 18:23:41.766809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.506 [2024-11-19 18:23:41.766815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.506 [2024-11-19 18:23:41.766818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.766822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:40.507 [2024-11-19 18:23:41.766827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.507 [2024-11-19 18:23:41.766832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:40.507 [2024-11-19 18:23:41.766841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:24:40.507 [2024-11-19 18:23:41.766847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.766851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x724690) 00:24:40.507 [2024-11-19 18:23:41.766858] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.507 [2024-11-19 18:23:41.766869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786100, cid 0, qid 0 00:24:40.507 [2024-11-19 18:23:41.766875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786280, cid 1, qid 0 00:24:40.507 [2024-11-19 18:23:41.766879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786400, cid 2, qid 0 00:24:40.507 [2024-11-19 18:23:41.766884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:40.507 [2024-11-19 18:23:41.766889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786700, cid 4, qid 0 00:24:40.507 [2024-11-19 18:23:41.767125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.507 [2024-11-19 18:23:41.767131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.507 [2024-11-19 18:23:41.767135] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.767138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786700) on tqpair=0x724690 00:24:40.507 [2024-11-19 18:23:41.767147] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:40.507 [2024-11-19 18:23:41.767155] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:40.507 [2024-11-19 18:23:41.767178] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.767182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x724690) 00:24:40.507 [2024-11-19 18:23:41.767189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.507 [2024-11-19 18:23:41.767200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786700, cid 4, qid 0 00:24:40.507 [2024-11-19 18:23:41.767434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.507 [2024-11-19 18:23:41.767440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.507 [2024-11-19 18:23:41.767444] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.767448] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x724690): datao=0, datal=4096, cccid=4 00:24:40.507 [2024-11-19 18:23:41.767452] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x786700) on tqpair(0x724690): expected_datao=0, payload_size=4096 00:24:40.507 [2024-11-19 18:23:41.767456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.767463] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.767467] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.767630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.507 [2024-11-19 18:23:41.767636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.507 [2024-11-19 18:23:41.767640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.767644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786700) on tqpair=0x724690 00:24:40.507 [2024-11-19 18:23:41.767657] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:40.507 [2024-11-19 18:23:41.767688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.767692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x724690) 00:24:40.507 [2024-11-19 18:23:41.767699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.507 [2024-11-19 18:23:41.767706] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.767710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.767713] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x724690) 00:24:40.507 [2024-11-19 18:23:41.767719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.507 [2024-11-19 18:23:41.767733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786700, cid 4, qid 0 00:24:40.507 [2024-11-19 18:23:41.767739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786880, cid 5, qid 0 00:24:40.507 [2024-11-19 18:23:41.767988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.507 [2024-11-19 18:23:41.767994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.507 [2024-11-19 18:23:41.767998] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.768002] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x724690): datao=0, datal=1024, cccid=4 00:24:40.507 [2024-11-19 18:23:41.768006] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x786700) on tqpair(0x724690): expected_datao=0, 
payload_size=1024 00:24:40.507 [2024-11-19 18:23:41.768010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.768017] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.768023] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.768029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.507 [2024-11-19 18:23:41.768035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.507 [2024-11-19 18:23:41.768039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.768042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786880) on tqpair=0x724690 00:24:40.507 [2024-11-19 18:23:41.809353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.507 [2024-11-19 18:23:41.809366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.507 [2024-11-19 18:23:41.809370] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.809375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786700) on tqpair=0x724690 00:24:40.507 [2024-11-19 18:23:41.809390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.809395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x724690) 00:24:40.507 [2024-11-19 18:23:41.809403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.507 [2024-11-19 18:23:41.809420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786700, cid 4, qid 0 00:24:40.507 [2024-11-19 18:23:41.809678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.507 [2024-11-19 18:23:41.809686] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.507 [2024-11-19 18:23:41.809689] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.809694] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x724690): datao=0, datal=3072, cccid=4 00:24:40.507 [2024-11-19 18:23:41.809698] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x786700) on tqpair(0x724690): expected_datao=0, payload_size=3072 00:24:40.507 [2024-11-19 18:23:41.809703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.809710] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.809715] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.809880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.507 [2024-11-19 18:23:41.809888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.507 [2024-11-19 18:23:41.809892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.809896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786700) on tqpair=0x724690 00:24:40.507 [2024-11-19 18:23:41.809905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.809909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x724690) 00:24:40.507 [2024-11-19 18:23:41.809916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.507 [2024-11-19 18:23:41.809931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786700, cid 4, qid 0 00:24:40.507 [2024-11-19 18:23:41.810171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.507 [2024-11-19 
18:23:41.810178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.507 [2024-11-19 18:23:41.810182] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.810186] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x724690): datao=0, datal=8, cccid=4 00:24:40.507 [2024-11-19 18:23:41.810191] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x786700) on tqpair(0x724690): expected_datao=0, payload_size=8 00:24:40.507 [2024-11-19 18:23:41.810195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.810202] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.810206] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.855171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.507 [2024-11-19 18:23:41.855182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.507 [2024-11-19 18:23:41.855186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.507 [2024-11-19 18:23:41.855190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786700) on tqpair=0x724690 00:24:40.507 ===================================================== 00:24:40.507 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:40.507 ===================================================== 00:24:40.507 Controller Capabilities/Features 00:24:40.507 ================================ 00:24:40.507 Vendor ID: 0000 00:24:40.507 Subsystem Vendor ID: 0000 00:24:40.507 Serial Number: .................... 00:24:40.507 Model Number: ........................................ 
00:24:40.507 Firmware Version: 25.01 00:24:40.507 Recommended Arb Burst: 0 00:24:40.507 IEEE OUI Identifier: 00 00 00 00:24:40.507 Multi-path I/O 00:24:40.507 May have multiple subsystem ports: No 00:24:40.507 May have multiple controllers: No 00:24:40.507 Associated with SR-IOV VF: No 00:24:40.507 Max Data Transfer Size: 131072 00:24:40.508 Max Number of Namespaces: 0 00:24:40.508 Max Number of I/O Queues: 1024 00:24:40.508 NVMe Specification Version (VS): 1.3 00:24:40.508 NVMe Specification Version (Identify): 1.3 00:24:40.508 Maximum Queue Entries: 128 00:24:40.508 Contiguous Queues Required: Yes 00:24:40.508 Arbitration Mechanisms Supported 00:24:40.508 Weighted Round Robin: Not Supported 00:24:40.508 Vendor Specific: Not Supported 00:24:40.508 Reset Timeout: 15000 ms 00:24:40.508 Doorbell Stride: 4 bytes 00:24:40.508 NVM Subsystem Reset: Not Supported 00:24:40.508 Command Sets Supported 00:24:40.508 NVM Command Set: Supported 00:24:40.508 Boot Partition: Not Supported 00:24:40.508 Memory Page Size Minimum: 4096 bytes 00:24:40.508 Memory Page Size Maximum: 4096 bytes 00:24:40.508 Persistent Memory Region: Not Supported 00:24:40.508 Optional Asynchronous Events Supported 00:24:40.508 Namespace Attribute Notices: Not Supported 00:24:40.508 Firmware Activation Notices: Not Supported 00:24:40.508 ANA Change Notices: Not Supported 00:24:40.508 PLE Aggregate Log Change Notices: Not Supported 00:24:40.508 LBA Status Info Alert Notices: Not Supported 00:24:40.508 EGE Aggregate Log Change Notices: Not Supported 00:24:40.508 Normal NVM Subsystem Shutdown event: Not Supported 00:24:40.508 Zone Descriptor Change Notices: Not Supported 00:24:40.508 Discovery Log Change Notices: Supported 00:24:40.508 Controller Attributes 00:24:40.508 128-bit Host Identifier: Not Supported 00:24:40.508 Non-Operational Permissive Mode: Not Supported 00:24:40.508 NVM Sets: Not Supported 00:24:40.508 Read Recovery Levels: Not Supported 00:24:40.508 Endurance Groups: Not Supported 00:24:40.508 
Predictable Latency Mode: Not Supported 00:24:40.508 Traffic Based Keep ALive: Not Supported 00:24:40.508 Namespace Granularity: Not Supported 00:24:40.508 SQ Associations: Not Supported 00:24:40.508 UUID List: Not Supported 00:24:40.508 Multi-Domain Subsystem: Not Supported 00:24:40.508 Fixed Capacity Management: Not Supported 00:24:40.508 Variable Capacity Management: Not Supported 00:24:40.508 Delete Endurance Group: Not Supported 00:24:40.508 Delete NVM Set: Not Supported 00:24:40.508 Extended LBA Formats Supported: Not Supported 00:24:40.508 Flexible Data Placement Supported: Not Supported 00:24:40.508 00:24:40.508 Controller Memory Buffer Support 00:24:40.508 ================================ 00:24:40.508 Supported: No 00:24:40.508 00:24:40.508 Persistent Memory Region Support 00:24:40.508 ================================ 00:24:40.508 Supported: No 00:24:40.508 00:24:40.508 Admin Command Set Attributes 00:24:40.508 ============================ 00:24:40.508 Security Send/Receive: Not Supported 00:24:40.508 Format NVM: Not Supported 00:24:40.508 Firmware Activate/Download: Not Supported 00:24:40.508 Namespace Management: Not Supported 00:24:40.508 Device Self-Test: Not Supported 00:24:40.508 Directives: Not Supported 00:24:40.508 NVMe-MI: Not Supported 00:24:40.508 Virtualization Management: Not Supported 00:24:40.508 Doorbell Buffer Config: Not Supported 00:24:40.508 Get LBA Status Capability: Not Supported 00:24:40.508 Command & Feature Lockdown Capability: Not Supported 00:24:40.508 Abort Command Limit: 1 00:24:40.508 Async Event Request Limit: 4 00:24:40.508 Number of Firmware Slots: N/A 00:24:40.508 Firmware Slot 1 Read-Only: N/A 00:24:40.508 Firmware Activation Without Reset: N/A 00:24:40.508 Multiple Update Detection Support: N/A 00:24:40.508 Firmware Update Granularity: No Information Provided 00:24:40.508 Per-Namespace SMART Log: No 00:24:40.508 Asymmetric Namespace Access Log Page: Not Supported 00:24:40.508 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:24:40.508 Command Effects Log Page: Not Supported 00:24:40.508 Get Log Page Extended Data: Supported 00:24:40.508 Telemetry Log Pages: Not Supported 00:24:40.508 Persistent Event Log Pages: Not Supported 00:24:40.508 Supported Log Pages Log Page: May Support 00:24:40.508 Commands Supported & Effects Log Page: Not Supported 00:24:40.508 Feature Identifiers & Effects Log Page:May Support 00:24:40.508 NVMe-MI Commands & Effects Log Page: May Support 00:24:40.508 Data Area 4 for Telemetry Log: Not Supported 00:24:40.508 Error Log Page Entries Supported: 128 00:24:40.508 Keep Alive: Not Supported 00:24:40.508 00:24:40.508 NVM Command Set Attributes 00:24:40.508 ========================== 00:24:40.508 Submission Queue Entry Size 00:24:40.508 Max: 1 00:24:40.508 Min: 1 00:24:40.508 Completion Queue Entry Size 00:24:40.508 Max: 1 00:24:40.508 Min: 1 00:24:40.508 Number of Namespaces: 0 00:24:40.508 Compare Command: Not Supported 00:24:40.508 Write Uncorrectable Command: Not Supported 00:24:40.508 Dataset Management Command: Not Supported 00:24:40.508 Write Zeroes Command: Not Supported 00:24:40.508 Set Features Save Field: Not Supported 00:24:40.508 Reservations: Not Supported 00:24:40.508 Timestamp: Not Supported 00:24:40.508 Copy: Not Supported 00:24:40.508 Volatile Write Cache: Not Present 00:24:40.508 Atomic Write Unit (Normal): 1 00:24:40.508 Atomic Write Unit (PFail): 1 00:24:40.508 Atomic Compare & Write Unit: 1 00:24:40.508 Fused Compare & Write: Supported 00:24:40.508 Scatter-Gather List 00:24:40.508 SGL Command Set: Supported 00:24:40.508 SGL Keyed: Supported 00:24:40.508 SGL Bit Bucket Descriptor: Not Supported 00:24:40.508 SGL Metadata Pointer: Not Supported 00:24:40.508 Oversized SGL: Not Supported 00:24:40.508 SGL Metadata Address: Not Supported 00:24:40.508 SGL Offset: Supported 00:24:40.508 Transport SGL Data Block: Not Supported 00:24:40.508 Replay Protected Memory Block: Not Supported 00:24:40.508 00:24:40.508 
Firmware Slot Information 00:24:40.508 ========================= 00:24:40.508 Active slot: 0 00:24:40.508 00:24:40.508 00:24:40.508 Error Log 00:24:40.508 ========= 00:24:40.508 00:24:40.508 Active Namespaces 00:24:40.508 ================= 00:24:40.508 Discovery Log Page 00:24:40.508 ================== 00:24:40.508 Generation Counter: 2 00:24:40.508 Number of Records: 2 00:24:40.508 Record Format: 0 00:24:40.508 00:24:40.508 Discovery Log Entry 0 00:24:40.508 ---------------------- 00:24:40.508 Transport Type: 3 (TCP) 00:24:40.508 Address Family: 1 (IPv4) 00:24:40.508 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:40.508 Entry Flags: 00:24:40.508 Duplicate Returned Information: 1 00:24:40.508 Explicit Persistent Connection Support for Discovery: 1 00:24:40.508 Transport Requirements: 00:24:40.508 Secure Channel: Not Required 00:24:40.508 Port ID: 0 (0x0000) 00:24:40.508 Controller ID: 65535 (0xffff) 00:24:40.508 Admin Max SQ Size: 128 00:24:40.508 Transport Service Identifier: 4420 00:24:40.508 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:40.508 Transport Address: 10.0.0.2 00:24:40.508 Discovery Log Entry 1 00:24:40.508 ---------------------- 00:24:40.508 Transport Type: 3 (TCP) 00:24:40.508 Address Family: 1 (IPv4) 00:24:40.508 Subsystem Type: 2 (NVM Subsystem) 00:24:40.508 Entry Flags: 00:24:40.508 Duplicate Returned Information: 0 00:24:40.508 Explicit Persistent Connection Support for Discovery: 0 00:24:40.508 Transport Requirements: 00:24:40.508 Secure Channel: Not Required 00:24:40.508 Port ID: 0 (0x0000) 00:24:40.508 Controller ID: 65535 (0xffff) 00:24:40.508 Admin Max SQ Size: 128 00:24:40.508 Transport Service Identifier: 4420 00:24:40.508 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:40.508 Transport Address: 10.0.0.2 [2024-11-19 18:23:41.855297] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:40.508 [2024-11-19 
18:23:41.855310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786100) on tqpair=0x724690 00:24:40.508 [2024-11-19 18:23:41.855317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.508 [2024-11-19 18:23:41.855323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786280) on tqpair=0x724690 00:24:40.508 [2024-11-19 18:23:41.855328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.508 [2024-11-19 18:23:41.855333] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786400) on tqpair=0x724690 00:24:40.508 [2024-11-19 18:23:41.855337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.508 [2024-11-19 18:23:41.855342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:40.508 [2024-11-19 18:23:41.855347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.508 [2024-11-19 18:23:41.855360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.508 [2024-11-19 18:23:41.855365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.508 [2024-11-19 18:23:41.855368] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:40.509 [2024-11-19 18:23:41.855376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.509 [2024-11-19 18:23:41.855391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:40.509 [2024-11-19 18:23:41.855605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.509 [2024-11-19 
18:23:41.855612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.509 [2024-11-19 18:23:41.855615] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.855619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:40.509 [2024-11-19 18:23:41.855626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.855630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.855634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:40.509 [2024-11-19 18:23:41.855641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.509 [2024-11-19 18:23:41.855654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:40.509 [2024-11-19 18:23:41.855856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.509 [2024-11-19 18:23:41.855862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.509 [2024-11-19 18:23:41.855866] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.855869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:40.509 [2024-11-19 18:23:41.855875] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:40.509 [2024-11-19 18:23:41.855880] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:40.509 [2024-11-19 18:23:41.855889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.855893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.509 
[2024-11-19 18:23:41.855899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:40.509 [2024-11-19 18:23:41.855906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.509 [2024-11-19 18:23:41.855916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:40.509 [2024-11-19 18:23:41.856124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.509 [2024-11-19 18:23:41.856131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.509 [2024-11-19 18:23:41.856134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.856138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:40.509 [2024-11-19 18:23:41.856148] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.856152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.856156] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:40.509 [2024-11-19 18:23:41.856170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.509 [2024-11-19 18:23:41.856180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:40.509 [2024-11-19 18:23:41.856390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.509 [2024-11-19 18:23:41.856396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.509 [2024-11-19 18:23:41.856400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.856405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 
00:24:40.509 [2024-11-19 18:23:41.856415] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.856420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.856423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:40.509 [2024-11-19 18:23:41.856430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.509 [2024-11-19 18:23:41.856440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:40.509 [2024-11-19 18:23:41.856629] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.509 [2024-11-19 18:23:41.856636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.509 [2024-11-19 18:23:41.856639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.856643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:40.509 [2024-11-19 18:23:41.856654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.856661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.856667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:40.509 [2024-11-19 18:23:41.856674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.509 [2024-11-19 18:23:41.856685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:40.509 [2024-11-19 18:23:41.856890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.509 [2024-11-19 18:23:41.856896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.509 
[2024-11-19 18:23:41.856899] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.856903] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:40.509 [2024-11-19 18:23:41.856913] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.856917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.856920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:40.509 [2024-11-19 18:23:41.856929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.509 [2024-11-19 18:23:41.856940] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:40.509 [2024-11-19 18:23:41.857163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.509 [2024-11-19 18:23:41.857171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.509 [2024-11-19 18:23:41.857174] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.857178] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:40.509 [2024-11-19 18:23:41.857188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.857192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.857196] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:40.509 [2024-11-19 18:23:41.857203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.509 [2024-11-19 18:23:41.857214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 
00:24:40.509 [2024-11-19 18:23:41.857431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.509 [2024-11-19 18:23:41.857438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.509 [2024-11-19 18:23:41.857442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.857445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:40.509 [2024-11-19 18:23:41.857456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.857460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.857463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:40.509 [2024-11-19 18:23:41.857470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.509 [2024-11-19 18:23:41.857481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:40.509 [2024-11-19 18:23:41.857711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.509 [2024-11-19 18:23:41.857717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.509 [2024-11-19 18:23:41.857720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.857724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:40.509 [2024-11-19 18:23:41.857734] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.857738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.857742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:40.509 [2024-11-19 18:23:41.857749] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.509 [2024-11-19 18:23:41.857759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:40.509 [2024-11-19 18:23:41.857959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.509 [2024-11-19 18:23:41.857967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.509 [2024-11-19 18:23:41.857970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.857974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:40.509 [2024-11-19 18:23:41.857985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.857990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.857994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:40.509 [2024-11-19 18:23:41.858001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.509 [2024-11-19 18:23:41.858013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:40.509 [2024-11-19 18:23:41.858210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.509 [2024-11-19 18:23:41.858218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.509 [2024-11-19 18:23:41.858222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.858226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:40.509 [2024-11-19 18:23:41.858236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.858240] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.509 [2024-11-19 18:23:41.858243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:40.509 [2024-11-19 18:23:41.858250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.509 [2024-11-19 18:23:41.858260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:40.509 [2024-11-19 18:23:41.858462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.509 [2024-11-19 18:23:41.858468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.509 [2024-11-19 18:23:41.858472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.510 [2024-11-19 18:23:41.858476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:40.510 [2024-11-19 18:23:41.858485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.510 [2024-11-19 18:23:41.858489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.510 [2024-11-19 18:23:41.858493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:40.510 [2024-11-19 18:23:41.858499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.510 [2024-11-19 18:23:41.858509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:40.510 [2024-11-19 18:23:41.858725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.510 [2024-11-19 18:23:41.858731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.510 [2024-11-19 18:23:41.858734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.510 [2024-11-19 18:23:41.858738] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:40.510 [2024-11-19 18:23:41.858748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.510 [2024-11-19 18:23:41.858752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.510 [2024-11-19 18:23:41.858755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:40.510 [2024-11-19 18:23:41.858762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.510 [2024-11-19 18:23:41.858772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:40.510 [2024-11-19 18:23:41.858952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.510 [2024-11-19 18:23:41.858958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.510 [2024-11-19 18:23:41.858962] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.510 [2024-11-19 18:23:41.858966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:40.510 [2024-11-19 18:23:41.858975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.510 [2024-11-19 18:23:41.858979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.510 [2024-11-19 18:23:41.858983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:40.510 [2024-11-19 18:23:41.858990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.510 [2024-11-19 18:23:41.859000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:40.510 [2024-11-19 18:23:41.863166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.510 [2024-11-19 
18:23:41.863174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.510 [2024-11-19 18:23:41.863178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.510 [2024-11-19 18:23:41.863182] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:40.510 [2024-11-19 18:23:41.863192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.510 [2024-11-19 18:23:41.863196] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.510 [2024-11-19 18:23:41.863200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x724690) 00:24:40.510 [2024-11-19 18:23:41.863207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.510 [2024-11-19 18:23:41.863218] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x786580, cid 3, qid 0 00:24:40.510 [2024-11-19 18:23:41.863414] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.510 [2024-11-19 18:23:41.863421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.510 [2024-11-19 18:23:41.863424] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.510 [2024-11-19 18:23:41.863428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x786580) on tqpair=0x724690 00:24:40.510 [2024-11-19 18:23:41.863436] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:24:40.510 00:24:40.510 18:23:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:40.510 [2024-11-19 18:23:41.911852] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 
initialization... 00:24:40.510 [2024-11-19 18:23:41.911898] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082848 ] 00:24:40.510 [2024-11-19 18:23:41.966657] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:40.510 [2024-11-19 18:23:41.966723] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:40.510 [2024-11-19 18:23:41.966728] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:40.510 [2024-11-19 18:23:41.966743] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:40.510 [2024-11-19 18:23:41.966756] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:40.775 [2024-11-19 18:23:41.970470] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:40.775 [2024-11-19 18:23:41.970515] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13e0690 0 00:24:40.775 [2024-11-19 18:23:41.978207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:40.775 [2024-11-19 18:23:41.978224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:40.775 [2024-11-19 18:23:41.978229] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:40.775 [2024-11-19 18:23:41.978233] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:40.775 [2024-11-19 18:23:41.978272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.775 [2024-11-19 18:23:41.978278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.775 [2024-11-19 18:23:41.978282] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e0690) 00:24:40.775 [2024-11-19 18:23:41.978301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:40.775 [2024-11-19 18:23:41.978326] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442100, cid 0, qid 0 00:24:40.775 [2024-11-19 18:23:41.986172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.775 [2024-11-19 18:23:41.986182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.775 [2024-11-19 18:23:41.986185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.775 [2024-11-19 18:23:41.986190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442100) on tqpair=0x13e0690 00:24:40.775 [2024-11-19 18:23:41.986203] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:40.775 [2024-11-19 18:23:41.986211] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:40.775 [2024-11-19 18:23:41.986216] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:40.775 [2024-11-19 18:23:41.986232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.775 [2024-11-19 18:23:41.986237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.775 [2024-11-19 18:23:41.986240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e0690) 00:24:40.775 [2024-11-19 18:23:41.986248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.775 [2024-11-19 18:23:41.986264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442100, cid 0, qid 0 00:24:40.775 [2024-11-19 
18:23:41.986453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.775 [2024-11-19 18:23:41.986459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.775 [2024-11-19 18:23:41.986463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.775 [2024-11-19 18:23:41.986467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442100) on tqpair=0x13e0690 00:24:40.775 [2024-11-19 18:23:41.986473] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:40.775 [2024-11-19 18:23:41.986480] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:40.775 [2024-11-19 18:23:41.986488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.775 [2024-11-19 18:23:41.986492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.775 [2024-11-19 18:23:41.986495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e0690) 00:24:40.775 [2024-11-19 18:23:41.986502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.775 [2024-11-19 18:23:41.986513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442100, cid 0, qid 0 00:24:40.775 [2024-11-19 18:23:41.986726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.775 [2024-11-19 18:23:41.986732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.775 [2024-11-19 18:23:41.986736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.775 [2024-11-19 18:23:41.986740] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442100) on tqpair=0x13e0690 00:24:40.775 [2024-11-19 18:23:41.986745] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:40.775 [2024-11-19 18:23:41.986754] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:40.775 [2024-11-19 18:23:41.986761] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.775 [2024-11-19 18:23:41.986766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.775 [2024-11-19 18:23:41.986769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e0690) 00:24:40.775 [2024-11-19 18:23:41.986781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.775 [2024-11-19 18:23:41.986791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442100, cid 0, qid 0 00:24:40.775 [2024-11-19 18:23:41.986996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.775 [2024-11-19 18:23:41.987003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.775 [2024-11-19 18:23:41.987006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.775 [2024-11-19 18:23:41.987010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442100) on tqpair=0x13e0690 00:24:40.775 [2024-11-19 18:23:41.987015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:40.775 [2024-11-19 18:23:41.987025] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.775 [2024-11-19 18:23:41.987029] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.775 [2024-11-19 18:23:41.987033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e0690) 00:24:40.775 [2024-11-19 18:23:41.987040] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.775 [2024-11-19 18:23:41.987050] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442100, cid 0, qid 0 00:24:40.775 [2024-11-19 18:23:41.987264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.775 [2024-11-19 18:23:41.987271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.775 [2024-11-19 18:23:41.987275] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.775 [2024-11-19 18:23:41.987279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442100) on tqpair=0x13e0690 00:24:40.775 [2024-11-19 18:23:41.987283] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:40.775 [2024-11-19 18:23:41.987288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:40.776 [2024-11-19 18:23:41.987296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:40.776 [2024-11-19 18:23:41.987405] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:40.776 [2024-11-19 18:23:41.987410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:40.776 [2024-11-19 18:23:41.987418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.987422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.987426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e0690) 00:24:40.776 [2024-11-19 18:23:41.987433] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.776 [2024-11-19 18:23:41.987444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442100, cid 0, qid 0 00:24:40.776 [2024-11-19 18:23:41.987649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.776 [2024-11-19 18:23:41.987655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.776 [2024-11-19 18:23:41.987659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.987663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442100) on tqpair=0x13e0690 00:24:40.776 [2024-11-19 18:23:41.987668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:40.776 [2024-11-19 18:23:41.987678] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.987682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.987685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e0690) 00:24:40.776 [2024-11-19 18:23:41.987695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.776 [2024-11-19 18:23:41.987707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442100, cid 0, qid 0 00:24:40.776 [2024-11-19 18:23:41.987876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.776 [2024-11-19 18:23:41.987883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.776 [2024-11-19 18:23:41.987886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.987890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1442100) on tqpair=0x13e0690 00:24:40.776 [2024-11-19 18:23:41.987895] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:40.776 [2024-11-19 18:23:41.987899] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:40.776 [2024-11-19 18:23:41.987907] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:40.776 [2024-11-19 18:23:41.987916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:40.776 [2024-11-19 18:23:41.987925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.987928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e0690) 00:24:40.776 [2024-11-19 18:23:41.987935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.776 [2024-11-19 18:23:41.987946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442100, cid 0, qid 0 00:24:40.776 [2024-11-19 18:23:41.988169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.776 [2024-11-19 18:23:41.988176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.776 [2024-11-19 18:23:41.988180] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.988184] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13e0690): datao=0, datal=4096, cccid=0 00:24:40.776 [2024-11-19 18:23:41.988189] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1442100) on tqpair(0x13e0690): expected_datao=0, 
payload_size=4096 00:24:40.776 [2024-11-19 18:23:41.988193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.988201] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.988205] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.988359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.776 [2024-11-19 18:23:41.988365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.776 [2024-11-19 18:23:41.988368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.988372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442100) on tqpair=0x13e0690 00:24:40.776 [2024-11-19 18:23:41.988380] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:40.776 [2024-11-19 18:23:41.988385] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:40.776 [2024-11-19 18:23:41.988390] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:40.776 [2024-11-19 18:23:41.988401] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:40.776 [2024-11-19 18:23:41.988405] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:40.776 [2024-11-19 18:23:41.988411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:40.776 [2024-11-19 18:23:41.988435] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:40.776 [2024-11-19 18:23:41.988442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.988446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.988450] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e0690) 00:24:40.776 [2024-11-19 18:23:41.988457] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:40.776 [2024-11-19 18:23:41.988469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442100, cid 0, qid 0 00:24:40.776 [2024-11-19 18:23:41.988685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.776 [2024-11-19 18:23:41.988693] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.776 [2024-11-19 18:23:41.988696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.988700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442100) on tqpair=0x13e0690 00:24:40.776 [2024-11-19 18:23:41.988708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.988712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.988715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e0690) 00:24:40.776 [2024-11-19 18:23:41.988721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.776 [2024-11-19 18:23:41.988728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.988732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.988735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13e0690) 00:24:40.776 [2024-11-19 18:23:41.988741] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.776 [2024-11-19 18:23:41.988748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.988751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.988755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13e0690) 00:24:40.776 [2024-11-19 18:23:41.988761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.776 [2024-11-19 18:23:41.988767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.988771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.988774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13e0690) 00:24:40.776 [2024-11-19 18:23:41.988780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.776 [2024-11-19 18:23:41.988785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:40.776 [2024-11-19 18:23:41.988793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:40.776 [2024-11-19 18:23:41.988800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.988804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13e0690) 00:24:40.776 [2024-11-19 18:23:41.988810] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.776 
[2024-11-19 18:23:41.988822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442100, cid 0, qid 0 00:24:40.776 [2024-11-19 18:23:41.988828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442280, cid 1, qid 0 00:24:40.776 [2024-11-19 18:23:41.988835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442400, cid 2, qid 0 00:24:40.776 [2024-11-19 18:23:41.988840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442580, cid 3, qid 0 00:24:40.776 [2024-11-19 18:23:41.988845] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442700, cid 4, qid 0 00:24:40.776 [2024-11-19 18:23:41.989070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.776 [2024-11-19 18:23:41.989076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.776 [2024-11-19 18:23:41.989079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.989083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442700) on tqpair=0x13e0690 00:24:40.776 [2024-11-19 18:23:41.989091] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:40.776 [2024-11-19 18:23:41.989097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:40.776 [2024-11-19 18:23:41.989105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:40.776 [2024-11-19 18:23:41.989113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:40.776 [2024-11-19 18:23:41.989121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.989125] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.776 [2024-11-19 18:23:41.989129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13e0690) 00:24:40.776 [2024-11-19 18:23:41.989135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:40.776 [2024-11-19 18:23:41.989146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442700, cid 4, qid 0 00:24:40.777 [2024-11-19 18:23:41.989367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.777 [2024-11-19 18:23:41.989374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.777 [2024-11-19 18:23:41.989377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.989381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442700) on tqpair=0x13e0690 00:24:40.777 [2024-11-19 18:23:41.989450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:40.777 [2024-11-19 18:23:41.989460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:40.777 [2024-11-19 18:23:41.989468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.989472] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13e0690) 00:24:40.777 [2024-11-19 18:23:41.989478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.777 [2024-11-19 18:23:41.989489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442700, cid 4, qid 0 00:24:40.777 [2024-11-19 18:23:41.989716] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.777 [2024-11-19 18:23:41.989723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.777 [2024-11-19 18:23:41.989727] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.989731] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13e0690): datao=0, datal=4096, cccid=4 00:24:40.777 [2024-11-19 18:23:41.989735] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1442700) on tqpair(0x13e0690): expected_datao=0, payload_size=4096 00:24:40.777 [2024-11-19 18:23:41.989740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.989775] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.989785] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.989925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.777 [2024-11-19 18:23:41.989932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.777 [2024-11-19 18:23:41.989935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.989939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442700) on tqpair=0x13e0690 00:24:40.777 [2024-11-19 18:23:41.989949] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:40.777 [2024-11-19 18:23:41.989965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:40.777 [2024-11-19 18:23:41.989975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:40.777 [2024-11-19 18:23:41.989982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:40.777 [2024-11-19 18:23:41.989986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13e0690) 00:24:40.777 [2024-11-19 18:23:41.989993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.777 [2024-11-19 18:23:41.990004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442700, cid 4, qid 0 00:24:40.777 [2024-11-19 18:23:41.994174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.777 [2024-11-19 18:23:41.994185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.777 [2024-11-19 18:23:41.994188] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.994192] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13e0690): datao=0, datal=4096, cccid=4 00:24:40.777 [2024-11-19 18:23:41.994198] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1442700) on tqpair(0x13e0690): expected_datao=0, payload_size=4096 00:24:40.777 [2024-11-19 18:23:41.994205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.994212] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.994216] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.994222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.777 [2024-11-19 18:23:41.994228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.777 [2024-11-19 18:23:41.994231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.994235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442700) on tqpair=0x13e0690 00:24:40.777 [2024-11-19 18:23:41.994251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:40.777 [2024-11-19 18:23:41.994263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:40.777 [2024-11-19 18:23:41.994271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.994275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13e0690) 00:24:40.777 [2024-11-19 18:23:41.994284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.777 [2024-11-19 18:23:41.994298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442700, cid 4, qid 0 00:24:40.777 [2024-11-19 18:23:41.994492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.777 [2024-11-19 18:23:41.994500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.777 [2024-11-19 18:23:41.994503] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.994507] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13e0690): datao=0, datal=4096, cccid=4 00:24:40.777 [2024-11-19 18:23:41.994516] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1442700) on tqpair(0x13e0690): expected_datao=0, payload_size=4096 00:24:40.777 [2024-11-19 18:23:41.994520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.994527] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.994530] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.994674] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.777 [2024-11-19 18:23:41.994681] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.777 [2024-11-19 18:23:41.994684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.994688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442700) on tqpair=0x13e0690 00:24:40.777 [2024-11-19 18:23:41.994696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:40.777 [2024-11-19 18:23:41.994705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:40.777 [2024-11-19 18:23:41.994715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:40.777 [2024-11-19 18:23:41.994722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:40.777 [2024-11-19 18:23:41.994727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:40.777 [2024-11-19 18:23:41.994733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:40.777 [2024-11-19 18:23:41.994740] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:40.777 [2024-11-19 18:23:41.994745] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:40.777 [2024-11-19 18:23:41.994751] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:40.777 [2024-11-19 18:23:41.994768] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.994772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13e0690) 00:24:40.777 [2024-11-19 18:23:41.994779] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.777 [2024-11-19 18:23:41.994786] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.994790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.994793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13e0690) 00:24:40.777 [2024-11-19 18:23:41.994800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.777 [2024-11-19 18:23:41.994815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442700, cid 4, qid 0 00:24:40.777 [2024-11-19 18:23:41.994820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442880, cid 5, qid 0 00:24:40.777 [2024-11-19 18:23:41.995052] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.777 [2024-11-19 18:23:41.995059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.777 [2024-11-19 18:23:41.995062] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.995066] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442700) on tqpair=0x13e0690 00:24:40.777 [2024-11-19 18:23:41.995073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.777 [2024-11-19 18:23:41.995079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.777 [2024-11-19 18:23:41.995082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.995090] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442880) on tqpair=0x13e0690 00:24:40.777 [2024-11-19 18:23:41.995100] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.995105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13e0690) 00:24:40.777 [2024-11-19 18:23:41.995113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.777 [2024-11-19 18:23:41.995125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442880, cid 5, qid 0 00:24:40.777 [2024-11-19 18:23:41.995352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.777 [2024-11-19 18:23:41.995361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.777 [2024-11-19 18:23:41.995364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.995368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442880) on tqpair=0x13e0690 00:24:40.777 [2024-11-19 18:23:41.995379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.777 [2024-11-19 18:23:41.995383] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13e0690) 00:24:40.777 [2024-11-19 18:23:41.995390] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.777 [2024-11-19 18:23:41.995400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442880, cid 5, qid 0 00:24:40.777 [2024-11-19 18:23:41.995595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.777 [2024-11-19 18:23:41.995601] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.778 [2024-11-19 18:23:41.995605] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.995610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442880) on tqpair=0x13e0690 00:24:40.778 [2024-11-19 18:23:41.995619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.995624] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13e0690) 00:24:40.778 [2024-11-19 18:23:41.995631] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.778 [2024-11-19 18:23:41.995640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442880, cid 5, qid 0 00:24:40.778 [2024-11-19 18:23:41.995855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.778 [2024-11-19 18:23:41.995862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.778 [2024-11-19 18:23:41.995866] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.995870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442880) on tqpair=0x13e0690 00:24:40.778 [2024-11-19 18:23:41.995886] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.995890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13e0690) 00:24:40.778 [2024-11-19 18:23:41.995897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.778 [2024-11-19 18:23:41.995904] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.995908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13e0690) 00:24:40.778 [2024-11-19 18:23:41.995914] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.778 [2024-11-19 18:23:41.995922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.995926] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x13e0690) 00:24:40.778 [2024-11-19 18:23:41.995932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.778 [2024-11-19 18:23:41.995942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.995946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13e0690) 00:24:40.778 [2024-11-19 18:23:41.995953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.778 [2024-11-19 18:23:41.995964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442880, cid 5, qid 0 00:24:40.778 [2024-11-19 18:23:41.995969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442700, cid 4, qid 0 00:24:40.778 [2024-11-19 18:23:41.995974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442a00, cid 6, qid 0 00:24:40.778 [2024-11-19 18:23:41.995979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442b80, cid 7, qid 0 00:24:40.778 [2024-11-19 18:23:41.996257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.778 [2024-11-19 18:23:41.996264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.778 [2024-11-19 18:23:41.996268] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996271] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13e0690): datao=0, datal=8192, cccid=5 00:24:40.778 [2024-11-19 18:23:41.996276] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1442880) on tqpair(0x13e0690): expected_datao=0, payload_size=8192 00:24:40.778 [2024-11-19 18:23:41.996280] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996379] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996384] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.778 [2024-11-19 18:23:41.996395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.778 [2024-11-19 18:23:41.996398] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996402] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13e0690): datao=0, datal=512, cccid=4 00:24:40.778 [2024-11-19 18:23:41.996407] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1442700) on tqpair(0x13e0690): expected_datao=0, payload_size=512 00:24:40.778 [2024-11-19 18:23:41.996411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996417] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996421] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.778 [2024-11-19 18:23:41.996432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.778 [2024-11-19 18:23:41.996436] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996439] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0x13e0690): datao=0, datal=512, cccid=6 00:24:40.778 [2024-11-19 18:23:41.996444] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1442a00) on tqpair(0x13e0690): expected_datao=0, payload_size=512 00:24:40.778 [2024-11-19 18:23:41.996448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996455] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996458] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:40.778 [2024-11-19 18:23:41.996470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:40.778 [2024-11-19 18:23:41.996473] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996477] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13e0690): datao=0, datal=4096, cccid=7 00:24:40.778 [2024-11-19 18:23:41.996481] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1442b80) on tqpair(0x13e0690): expected_datao=0, payload_size=4096 00:24:40.778 [2024-11-19 18:23:41.996491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996498] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996501] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.778 [2024-11-19 18:23:41.996522] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.778 [2024-11-19 18:23:41.996525] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442880) on tqpair=0x13e0690 00:24:40.778 [2024-11-19 
18:23:41.996543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.778 [2024-11-19 18:23:41.996549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.778 [2024-11-19 18:23:41.996552] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442700) on tqpair=0x13e0690 00:24:40.778 [2024-11-19 18:23:41.996568] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.778 [2024-11-19 18:23:41.996574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.778 [2024-11-19 18:23:41.996577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442a00) on tqpair=0x13e0690 00:24:40.778 [2024-11-19 18:23:41.996588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.778 [2024-11-19 18:23:41.996594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.778 [2024-11-19 18:23:41.996598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.778 [2024-11-19 18:23:41.996602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442b80) on tqpair=0x13e0690 00:24:40.778 ===================================================== 00:24:40.778 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:40.778 ===================================================== 00:24:40.778 Controller Capabilities/Features 00:24:40.778 ================================ 00:24:40.778 Vendor ID: 8086 00:24:40.778 Subsystem Vendor ID: 8086 00:24:40.778 Serial Number: SPDK00000000000001 00:24:40.778 Model Number: SPDK bdev Controller 00:24:40.778 Firmware Version: 25.01 00:24:40.778 Recommended Arb Burst: 6 00:24:40.778 IEEE OUI Identifier: e4 d2 5c 00:24:40.778 Multi-path I/O 00:24:40.778 May have 
multiple subsystem ports: Yes 00:24:40.778 May have multiple controllers: Yes 00:24:40.778 Associated with SR-IOV VF: No 00:24:40.778 Max Data Transfer Size: 131072 00:24:40.778 Max Number of Namespaces: 32 00:24:40.778 Max Number of I/O Queues: 127 00:24:40.778 NVMe Specification Version (VS): 1.3 00:24:40.778 NVMe Specification Version (Identify): 1.3 00:24:40.778 Maximum Queue Entries: 128 00:24:40.778 Contiguous Queues Required: Yes 00:24:40.778 Arbitration Mechanisms Supported 00:24:40.778 Weighted Round Robin: Not Supported 00:24:40.778 Vendor Specific: Not Supported 00:24:40.778 Reset Timeout: 15000 ms 00:24:40.778 Doorbell Stride: 4 bytes 00:24:40.778 NVM Subsystem Reset: Not Supported 00:24:40.778 Command Sets Supported 00:24:40.778 NVM Command Set: Supported 00:24:40.778 Boot Partition: Not Supported 00:24:40.778 Memory Page Size Minimum: 4096 bytes 00:24:40.778 Memory Page Size Maximum: 4096 bytes 00:24:40.778 Persistent Memory Region: Not Supported 00:24:40.778 Optional Asynchronous Events Supported 00:24:40.778 Namespace Attribute Notices: Supported 00:24:40.778 Firmware Activation Notices: Not Supported 00:24:40.778 ANA Change Notices: Not Supported 00:24:40.778 PLE Aggregate Log Change Notices: Not Supported 00:24:40.778 LBA Status Info Alert Notices: Not Supported 00:24:40.778 EGE Aggregate Log Change Notices: Not Supported 00:24:40.778 Normal NVM Subsystem Shutdown event: Not Supported 00:24:40.778 Zone Descriptor Change Notices: Not Supported 00:24:40.778 Discovery Log Change Notices: Not Supported 00:24:40.778 Controller Attributes 00:24:40.778 128-bit Host Identifier: Supported 00:24:40.778 Non-Operational Permissive Mode: Not Supported 00:24:40.778 NVM Sets: Not Supported 00:24:40.778 Read Recovery Levels: Not Supported 00:24:40.778 Endurance Groups: Not Supported 00:24:40.778 Predictable Latency Mode: Not Supported 00:24:40.778 Traffic Based Keep ALive: Not Supported 00:24:40.779 Namespace Granularity: Not Supported 00:24:40.779 SQ 
Associations: Not Supported 00:24:40.779 UUID List: Not Supported 00:24:40.779 Multi-Domain Subsystem: Not Supported 00:24:40.779 Fixed Capacity Management: Not Supported 00:24:40.779 Variable Capacity Management: Not Supported 00:24:40.779 Delete Endurance Group: Not Supported 00:24:40.779 Delete NVM Set: Not Supported 00:24:40.779 Extended LBA Formats Supported: Not Supported 00:24:40.779 Flexible Data Placement Supported: Not Supported 00:24:40.779 00:24:40.779 Controller Memory Buffer Support 00:24:40.779 ================================ 00:24:40.779 Supported: No 00:24:40.779 00:24:40.779 Persistent Memory Region Support 00:24:40.779 ================================ 00:24:40.779 Supported: No 00:24:40.779 00:24:40.779 Admin Command Set Attributes 00:24:40.779 ============================ 00:24:40.779 Security Send/Receive: Not Supported 00:24:40.779 Format NVM: Not Supported 00:24:40.779 Firmware Activate/Download: Not Supported 00:24:40.779 Namespace Management: Not Supported 00:24:40.779 Device Self-Test: Not Supported 00:24:40.779 Directives: Not Supported 00:24:40.779 NVMe-MI: Not Supported 00:24:40.779 Virtualization Management: Not Supported 00:24:40.779 Doorbell Buffer Config: Not Supported 00:24:40.779 Get LBA Status Capability: Not Supported 00:24:40.779 Command & Feature Lockdown Capability: Not Supported 00:24:40.779 Abort Command Limit: 4 00:24:40.779 Async Event Request Limit: 4 00:24:40.779 Number of Firmware Slots: N/A 00:24:40.779 Firmware Slot 1 Read-Only: N/A 00:24:40.779 Firmware Activation Without Reset: N/A 00:24:40.779 Multiple Update Detection Support: N/A 00:24:40.779 Firmware Update Granularity: No Information Provided 00:24:40.779 Per-Namespace SMART Log: No 00:24:40.779 Asymmetric Namespace Access Log Page: Not Supported 00:24:40.779 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:40.779 Command Effects Log Page: Supported 00:24:40.779 Get Log Page Extended Data: Supported 00:24:40.779 Telemetry Log Pages: Not Supported 00:24:40.779 
Persistent Event Log Pages: Not Supported 00:24:40.779 Supported Log Pages Log Page: May Support 00:24:40.779 Commands Supported & Effects Log Page: Not Supported 00:24:40.779 Feature Identifiers & Effects Log Page:May Support 00:24:40.779 NVMe-MI Commands & Effects Log Page: May Support 00:24:40.779 Data Area 4 for Telemetry Log: Not Supported 00:24:40.779 Error Log Page Entries Supported: 128 00:24:40.779 Keep Alive: Supported 00:24:40.779 Keep Alive Granularity: 10000 ms 00:24:40.779 00:24:40.779 NVM Command Set Attributes 00:24:40.779 ========================== 00:24:40.779 Submission Queue Entry Size 00:24:40.779 Max: 64 00:24:40.779 Min: 64 00:24:40.779 Completion Queue Entry Size 00:24:40.779 Max: 16 00:24:40.779 Min: 16 00:24:40.779 Number of Namespaces: 32 00:24:40.779 Compare Command: Supported 00:24:40.779 Write Uncorrectable Command: Not Supported 00:24:40.779 Dataset Management Command: Supported 00:24:40.779 Write Zeroes Command: Supported 00:24:40.779 Set Features Save Field: Not Supported 00:24:40.779 Reservations: Supported 00:24:40.779 Timestamp: Not Supported 00:24:40.779 Copy: Supported 00:24:40.779 Volatile Write Cache: Present 00:24:40.779 Atomic Write Unit (Normal): 1 00:24:40.779 Atomic Write Unit (PFail): 1 00:24:40.779 Atomic Compare & Write Unit: 1 00:24:40.779 Fused Compare & Write: Supported 00:24:40.779 Scatter-Gather List 00:24:40.779 SGL Command Set: Supported 00:24:40.779 SGL Keyed: Supported 00:24:40.779 SGL Bit Bucket Descriptor: Not Supported 00:24:40.779 SGL Metadata Pointer: Not Supported 00:24:40.779 Oversized SGL: Not Supported 00:24:40.779 SGL Metadata Address: Not Supported 00:24:40.779 SGL Offset: Supported 00:24:40.779 Transport SGL Data Block: Not Supported 00:24:40.779 Replay Protected Memory Block: Not Supported 00:24:40.779 00:24:40.779 Firmware Slot Information 00:24:40.779 ========================= 00:24:40.779 Active slot: 1 00:24:40.779 Slot 1 Firmware Revision: 25.01 00:24:40.779 00:24:40.779 00:24:40.779 
Commands Supported and Effects 00:24:40.779 ============================== 00:24:40.779 Admin Commands 00:24:40.779 -------------- 00:24:40.779 Get Log Page (02h): Supported 00:24:40.779 Identify (06h): Supported 00:24:40.779 Abort (08h): Supported 00:24:40.779 Set Features (09h): Supported 00:24:40.779 Get Features (0Ah): Supported 00:24:40.779 Asynchronous Event Request (0Ch): Supported 00:24:40.779 Keep Alive (18h): Supported 00:24:40.779 I/O Commands 00:24:40.779 ------------ 00:24:40.779 Flush (00h): Supported LBA-Change 00:24:40.779 Write (01h): Supported LBA-Change 00:24:40.779 Read (02h): Supported 00:24:40.779 Compare (05h): Supported 00:24:40.779 Write Zeroes (08h): Supported LBA-Change 00:24:40.779 Dataset Management (09h): Supported LBA-Change 00:24:40.779 Copy (19h): Supported LBA-Change 00:24:40.779 00:24:40.779 Error Log 00:24:40.779 ========= 00:24:40.779 00:24:40.779 Arbitration 00:24:40.779 =========== 00:24:40.779 Arbitration Burst: 1 00:24:40.779 00:24:40.779 Power Management 00:24:40.779 ================ 00:24:40.779 Number of Power States: 1 00:24:40.779 Current Power State: Power State #0 00:24:40.779 Power State #0: 00:24:40.779 Max Power: 0.00 W 00:24:40.779 Non-Operational State: Operational 00:24:40.779 Entry Latency: Not Reported 00:24:40.779 Exit Latency: Not Reported 00:24:40.779 Relative Read Throughput: 0 00:24:40.779 Relative Read Latency: 0 00:24:40.779 Relative Write Throughput: 0 00:24:40.779 Relative Write Latency: 0 00:24:40.779 Idle Power: Not Reported 00:24:40.779 Active Power: Not Reported 00:24:40.779 Non-Operational Permissive Mode: Not Supported 00:24:40.779 00:24:40.779 Health Information 00:24:40.779 ================== 00:24:40.779 Critical Warnings: 00:24:40.779 Available Spare Space: OK 00:24:40.779 Temperature: OK 00:24:40.779 Device Reliability: OK 00:24:40.779 Read Only: No 00:24:40.779 Volatile Memory Backup: OK 00:24:40.779 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:40.779 Temperature Threshold: 0 Kelvin 
(-273 Celsius) 00:24:40.779 Available Spare: 0% 00:24:40.779 Available Spare Threshold: 0% 00:24:40.779 Life Percentage Used:[2024-11-19 18:23:41.996707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.779 [2024-11-19 18:23:41.996712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13e0690) 00:24:40.779 [2024-11-19 18:23:41.996719] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.779 [2024-11-19 18:23:41.996731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442b80, cid 7, qid 0 00:24:40.779 [2024-11-19 18:23:41.996939] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.779 [2024-11-19 18:23:41.996946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.779 [2024-11-19 18:23:41.996949] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.779 [2024-11-19 18:23:41.996953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442b80) on tqpair=0x13e0690 00:24:40.779 [2024-11-19 18:23:41.996987] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:40.779 [2024-11-19 18:23:41.996997] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442100) on tqpair=0x13e0690 00:24:40.779 [2024-11-19 18:23:41.997003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.779 [2024-11-19 18:23:41.997009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442280) on tqpair=0x13e0690 00:24:40.779 [2024-11-19 18:23:41.997014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.779 [2024-11-19 18:23:41.997019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1442400) on tqpair=0x13e0690 00:24:40.779 [2024-11-19 18:23:41.997024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.779 [2024-11-19 18:23:41.997029] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442580) on tqpair=0x13e0690 00:24:40.779 [2024-11-19 18:23:41.997033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.779 [2024-11-19 18:23:41.997045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.779 [2024-11-19 18:23:41.997049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.779 [2024-11-19 18:23:41.997052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13e0690) 00:24:40.779 [2024-11-19 18:23:41.997059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.780 [2024-11-19 18:23:41.997071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442580, cid 3, qid 0 00:24:40.780 [2024-11-19 18:23:41.997286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.780 [2024-11-19 18:23:41.997293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.780 [2024-11-19 18:23:41.997296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.780 [2024-11-19 18:23:41.997300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442580) on tqpair=0x13e0690 00:24:40.780 [2024-11-19 18:23:41.997307] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.780 [2024-11-19 18:23:41.997311] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.780 [2024-11-19 18:23:41.997315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13e0690) 
00:24:40.780 [2024-11-19 18:23:41.997321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.780 [2024-11-19 18:23:41.997335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442580, cid 3, qid 0 00:24:40.780 [2024-11-19 18:23:41.997547] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.780 [2024-11-19 18:23:41.997554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.780 [2024-11-19 18:23:41.997557] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.780 [2024-11-19 18:23:41.997561] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442580) on tqpair=0x13e0690 00:24:40.780 [2024-11-19 18:23:41.997566] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:40.780 [2024-11-19 18:23:41.997571] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:40.780 [2024-11-19 18:23:41.997580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.780 [2024-11-19 18:23:41.997584] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.780 [2024-11-19 18:23:41.997588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13e0690) 00:24:40.780 [2024-11-19 18:23:41.997595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.780 [2024-11-19 18:23:41.997605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442580, cid 3, qid 0 00:24:40.780 [2024-11-19 18:23:41.997793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.780 [2024-11-19 18:23:41.997799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.780 [2024-11-19 18:23:41.997802] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.780 [2024-11-19 18:23:41.997806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442580) on tqpair=0x13e0690 00:24:40.780 [2024-11-19 18:23:41.997816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.780 [2024-11-19 18:23:41.997820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.780 [2024-11-19 18:23:41.997824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13e0690) 00:24:40.780 [2024-11-19 18:23:41.997831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.780 [2024-11-19 18:23:41.997841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442580, cid 3, qid 0 00:24:40.780 [2024-11-19 18:23:41.998048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.780 [2024-11-19 18:23:41.998054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.780 [2024-11-19 18:23:41.998062] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.780 [2024-11-19 18:23:41.998067] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442580) on tqpair=0x13e0690 00:24:40.780 [2024-11-19 18:23:41.998076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:40.780 [2024-11-19 18:23:41.998080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:40.780 [2024-11-19 18:23:41.998084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13e0690) 00:24:40.780 [2024-11-19 18:23:41.998091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.780 [2024-11-19 18:23:41.998101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1442580, cid 3, qid 0 00:24:40.780 [2024-11-19 
18:23:42.002170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:40.780 [2024-11-19 18:23:42.002179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:40.780 [2024-11-19 18:23:42.002183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:40.780 [2024-11-19 18:23:42.002187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1442580) on tqpair=0x13e0690 00:24:40.780 [2024-11-19 18:23:42.002195] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:24:40.780 0% 00:24:40.780 Data Units Read: 0 00:24:40.780 Data Units Written: 0 00:24:40.780 Host Read Commands: 0 00:24:40.780 Host Write Commands: 0 00:24:40.780 Controller Busy Time: 0 minutes 00:24:40.780 Power Cycles: 0 00:24:40.780 Power On Hours: 0 hours 00:24:40.780 Unsafe Shutdowns: 0 00:24:40.780 Unrecoverable Media Errors: 0 00:24:40.780 Lifetime Error Log Entries: 0 00:24:40.780 Warning Temperature Time: 0 minutes 00:24:40.780 Critical Temperature Time: 0 minutes 00:24:40.780 00:24:40.780 Number of Queues 00:24:40.780 ================ 00:24:40.780 Number of I/O Submission Queues: 127 00:24:40.780 Number of I/O Completion Queues: 127 00:24:40.780 00:24:40.780 Active Namespaces 00:24:40.780 ================= 00:24:40.780 Namespace ID:1 00:24:40.780 Error Recovery Timeout: Unlimited 00:24:40.780 Command Set Identifier: NVM (00h) 00:24:40.780 Deallocate: Supported 00:24:40.780 Deallocated/Unwritten Error: Not Supported 00:24:40.780 Deallocated Read Value: Unknown 00:24:40.780 Deallocate in Write Zeroes: Not Supported 00:24:40.780 Deallocated Guard Field: 0xFFFF 00:24:40.780 Flush: Supported 00:24:40.780 Reservation: Supported 00:24:40.780 Namespace Sharing Capabilities: Multiple Controllers 00:24:40.780 Size (in LBAs): 131072 (0GiB) 00:24:40.780 Capacity (in LBAs): 131072 (0GiB) 00:24:40.780 Utilization (in LBAs): 131072 (0GiB) 00:24:40.780 NGUID: 
ABCDEF0123456789ABCDEF0123456789 00:24:40.780 EUI64: ABCDEF0123456789 00:24:40.780 UUID: ef6bf936-173e-43f1-8bdf-fa372e40da02 00:24:40.780 Thin Provisioning: Not Supported 00:24:40.780 Per-NS Atomic Units: Yes 00:24:40.780 Atomic Boundary Size (Normal): 0 00:24:40.780 Atomic Boundary Size (PFail): 0 00:24:40.780 Atomic Boundary Offset: 0 00:24:40.780 Maximum Single Source Range Length: 65535 00:24:40.780 Maximum Copy Length: 65535 00:24:40.780 Maximum Source Range Count: 1 00:24:40.780 NGUID/EUI64 Never Reused: No 00:24:40.780 Namespace Write Protected: No 00:24:40.780 Number of LBA Formats: 1 00:24:40.780 Current LBA Format: LBA Format #00 00:24:40.780 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:40.780 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:40.780 rmmod nvme_tcp 00:24:40.780 rmmod nvme_fabrics 00:24:40.780 rmmod nvme_keyring 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2082503 ']' 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2082503 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2082503 ']' 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2082503 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2082503 00:24:40.780 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:40.781 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:40.781 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2082503' 00:24:40.781 killing process with pid 2082503 00:24:40.781 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2082503 00:24:40.781 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2082503 00:24:41.041 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:41.041 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- 
# [[ tcp == \t\c\p ]] 00:24:41.041 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:41.041 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:41.041 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:41.041 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:41.041 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:41.041 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:41.041 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:41.041 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.042 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.042 18:23:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:43.589 00:24:43.589 real 0m11.589s 00:24:43.589 user 0m8.398s 00:24:43.589 sys 0m6.094s 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:43.589 ************************************ 00:24:43.589 END TEST nvmf_identify 00:24:43.589 ************************************ 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.589 ************************************ 00:24:43.589 START TEST nvmf_perf 00:24:43.589 ************************************ 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:43.589 * Looking for test storage... 00:24:43.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:43.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:24:43.589 --rc genhtml_branch_coverage=1 00:24:43.589 --rc genhtml_function_coverage=1 00:24:43.589 --rc genhtml_legend=1 00:24:43.589 --rc geninfo_all_blocks=1 00:24:43.589 --rc geninfo_unexecuted_blocks=1 00:24:43.589 00:24:43.589 ' 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:43.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.589 --rc genhtml_branch_coverage=1 00:24:43.589 --rc genhtml_function_coverage=1 00:24:43.589 --rc genhtml_legend=1 00:24:43.589 --rc geninfo_all_blocks=1 00:24:43.589 --rc geninfo_unexecuted_blocks=1 00:24:43.589 00:24:43.589 ' 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:43.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.589 --rc genhtml_branch_coverage=1 00:24:43.589 --rc genhtml_function_coverage=1 00:24:43.589 --rc genhtml_legend=1 00:24:43.589 --rc geninfo_all_blocks=1 00:24:43.589 --rc geninfo_unexecuted_blocks=1 00:24:43.589 00:24:43.589 ' 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:43.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.589 --rc genhtml_branch_coverage=1 00:24:43.589 --rc genhtml_function_coverage=1 00:24:43.589 --rc genhtml_legend=1 00:24:43.589 --rc geninfo_all_blocks=1 00:24:43.589 --rc geninfo_unexecuted_blocks=1 00:24:43.589 00:24:43.589 ' 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.589 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.589 18:23:44 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.590 18:23:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:43.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:24:43.590 18:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.733 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:51.734 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.734 
18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:51.734 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:51.734 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:51.734 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.734 18:23:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:51.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:24:51.734 00:24:51.734 --- 10.0.0.2 ping statistics --- 00:24:51.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.734 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:51.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:24:51.734 00:24:51.734 --- 10.0.0.1 ping statistics --- 00:24:51.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.734 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2087152 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2087152 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:51.734 
18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2087152 ']' 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:51.734 18:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:51.734 [2024-11-19 18:23:52.355568] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:24:51.734 [2024-11-19 18:23:52.355636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.734 [2024-11-19 18:23:52.455884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:51.734 [2024-11-19 18:23:52.508786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.734 [2024-11-19 18:23:52.508841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.734 [2024-11-19 18:23:52.508850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.735 [2024-11-19 18:23:52.508858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.735 [2024-11-19 18:23:52.508864] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:51.735 [2024-11-19 18:23:52.510929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.735 [2024-11-19 18:23:52.511088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.735 [2024-11-19 18:23:52.511247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:51.735 [2024-11-19 18:23:52.511248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.735 18:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.735 18:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:24:51.735 18:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:51.735 18:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:51.735 18:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:51.996 18:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.996 18:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:51.996 18:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:52.568 18:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:52.568 18:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:52.568 18:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:52.568 18:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:52.828 18:23:54 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:52.828 18:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:52.828 18:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:52.828 18:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:52.828 18:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:53.088 [2024-11-19 18:23:54.342190] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.088 18:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:53.348 18:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:53.348 18:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:53.348 18:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:53.348 18:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:53.607 18:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:53.867 [2024-11-19 18:23:55.097073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.867 18:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:24:53.867 18:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:53.867 18:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:53.867 18:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:53.867 18:23:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:55.250 Initializing NVMe Controllers 00:24:55.250 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:55.250 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:55.250 Initialization complete. Launching workers. 00:24:55.250 ======================================================== 00:24:55.250 Latency(us) 00:24:55.250 Device Information : IOPS MiB/s Average min max 00:24:55.250 PCIE (0000:65:00.0) NSID 1 from core 0: 78688.91 307.38 406.01 13.20 6211.09 00:24:55.250 ======================================================== 00:24:55.250 Total : 78688.91 307.38 406.01 13.20 6211.09 00:24:55.250 00:24:55.250 18:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:56.636 Initializing NVMe Controllers 00:24:56.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:56.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:56.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:56.636 Initialization complete. Launching workers. 
00:24:56.636 ======================================================== 00:24:56.636 Latency(us) 00:24:56.636 Device Information : IOPS MiB/s Average min max 00:24:56.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 74.00 0.29 13741.73 263.02 44969.72 00:24:56.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16501.53 6983.36 47889.21 00:24:56.636 ======================================================== 00:24:56.636 Total : 135.00 0.53 14988.75 263.02 47889.21 00:24:56.636 00:24:56.636 18:23:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:58.021 Initializing NVMe Controllers 00:24:58.021 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:58.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:58.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:58.021 Initialization complete. Launching workers. 
00:24:58.021 ======================================================== 00:24:58.021 Latency(us) 00:24:58.021 Device Information : IOPS MiB/s Average min max 00:24:58.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11707.99 45.73 2740.83 423.77 6348.26 00:24:58.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3840.00 15.00 8371.66 6368.59 15868.27 00:24:58.021 ======================================================== 00:24:58.021 Total : 15547.98 60.73 4131.52 423.77 15868.27 00:24:58.021 00:24:58.021 18:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:58.021 18:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:58.021 18:23:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:00.569 Initializing NVMe Controllers 00:25:00.569 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:00.569 Controller IO queue size 128, less than required. 00:25:00.569 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:00.569 Controller IO queue size 128, less than required. 00:25:00.569 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:00.569 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:00.569 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:00.569 Initialization complete. Launching workers. 
00:25:00.569 ======================================================== 00:25:00.569 Latency(us) 00:25:00.569 Device Information : IOPS MiB/s Average min max 00:25:00.569 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1870.96 467.74 69229.46 35139.29 125962.86 00:25:00.569 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 607.99 152.00 221055.16 64053.38 334433.33 00:25:00.569 ======================================================== 00:25:00.569 Total : 2478.94 619.74 106466.26 35139.29 334433.33 00:25:00.569 00:25:00.569 18:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:00.569 No valid NVMe controllers or AIO or URING devices found 00:25:00.569 Initializing NVMe Controllers 00:25:00.569 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:00.569 Controller IO queue size 128, less than required. 00:25:00.569 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:00.569 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:00.569 Controller IO queue size 128, less than required. 00:25:00.569 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:00.569 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:00.569 WARNING: Some requested NVMe devices were skipped 00:25:00.569 18:24:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:03.119 Initializing NVMe Controllers 00:25:03.119 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:03.119 Controller IO queue size 128, less than required. 00:25:03.119 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:03.119 Controller IO queue size 128, less than required. 00:25:03.119 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:03.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:03.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:03.119 Initialization complete. Launching workers. 
00:25:03.119 00:25:03.119 ==================== 00:25:03.119 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:03.119 TCP transport: 00:25:03.119 polls: 62073 00:25:03.119 idle_polls: 45212 00:25:03.119 sock_completions: 16861 00:25:03.119 nvme_completions: 7499 00:25:03.119 submitted_requests: 11208 00:25:03.119 queued_requests: 1 00:25:03.119 00:25:03.119 ==================== 00:25:03.119 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:03.119 TCP transport: 00:25:03.119 polls: 39844 00:25:03.119 idle_polls: 24569 00:25:03.119 sock_completions: 15275 00:25:03.119 nvme_completions: 7107 00:25:03.119 submitted_requests: 10674 00:25:03.119 queued_requests: 1 00:25:03.119 ======================================================== 00:25:03.119 Latency(us) 00:25:03.119 Device Information : IOPS MiB/s Average min max 00:25:03.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1872.00 468.00 70463.80 37269.30 130776.93 00:25:03.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1774.13 443.53 72152.12 36112.17 135429.09 00:25:03.119 ======================================================== 00:25:03.119 Total : 3646.13 911.53 71285.30 36112.17 135429.09 00:25:03.119 00:25:03.119 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:03.119 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:03.381 18:24:04 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:03.381 rmmod nvme_tcp 00:25:03.381 rmmod nvme_fabrics 00:25:03.381 rmmod nvme_keyring 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2087152 ']' 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2087152 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2087152 ']' 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2087152 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2087152 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2087152' 00:25:03.381 killing process with pid 2087152 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@973 -- # kill 2087152 00:25:03.381 18:24:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2087152 00:25:05.312 18:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:05.312 18:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:05.312 18:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:05.312 18:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:05.312 18:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:05.312 18:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:05.312 18:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:05.312 18:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:05.312 18:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:05.312 18:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.312 18:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.312 18:24:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.858 18:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:07.858 00:25:07.858 real 0m24.280s 00:25:07.858 user 0m58.409s 00:25:07.858 sys 0m8.627s 00:25:07.858 18:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:07.858 18:24:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:07.858 ************************************ 00:25:07.858 END TEST nvmf_perf 00:25:07.858 ************************************ 00:25:07.858 18:24:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test 
nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:07.858 18:24:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:07.858 18:24:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:07.858 18:24:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.858 ************************************ 00:25:07.858 START TEST nvmf_fio_host 00:25:07.858 ************************************ 00:25:07.858 18:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:07.858 * Looking for test storage... 00:25:07.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:07.858 18:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:07.858 18:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:07.858 18:24:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.858 18:24:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.858 18:24:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:07.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.858 --rc genhtml_branch_coverage=1 00:25:07.858 --rc genhtml_function_coverage=1 00:25:07.858 --rc genhtml_legend=1 00:25:07.858 --rc geninfo_all_blocks=1 00:25:07.858 --rc geninfo_unexecuted_blocks=1 00:25:07.858 00:25:07.858 ' 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:07.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.858 --rc genhtml_branch_coverage=1 00:25:07.858 --rc genhtml_function_coverage=1 00:25:07.858 --rc genhtml_legend=1 00:25:07.858 --rc geninfo_all_blocks=1 00:25:07.858 --rc geninfo_unexecuted_blocks=1 00:25:07.858 00:25:07.858 ' 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:07.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.858 --rc genhtml_branch_coverage=1 00:25:07.858 --rc genhtml_function_coverage=1 00:25:07.858 --rc genhtml_legend=1 00:25:07.858 --rc geninfo_all_blocks=1 00:25:07.858 --rc geninfo_unexecuted_blocks=1 00:25:07.858 00:25:07.858 ' 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:07.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.858 --rc genhtml_branch_coverage=1 00:25:07.858 --rc genhtml_function_coverage=1 00:25:07.858 --rc genhtml_legend=1 00:25:07.858 --rc geninfo_all_blocks=1 00:25:07.858 --rc geninfo_unexecuted_blocks=1 00:25:07.858 00:25:07.858 ' 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.858 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:07.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:07.859 18:24:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:07.859 18:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.002 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:16.002 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:16.002 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:16.002 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:16.002 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:16.002 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:16.002 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:16.002 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:16.002 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:4b:00.0 (0x8086 - 0x159b)' 00:25:16.003 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:16.003 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.003 18:24:16 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:16.003 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:16.003 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:16.003 18:24:16 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:16.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:25:16.003 00:25:16.003 --- 10.0.0.2 ping statistics --- 00:25:16.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.003 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:16.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:16.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:25:16.003 00:25:16.003 --- 10.0.0.1 ping statistics --- 00:25:16.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.003 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:16.003 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.004 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:16.004 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:16.004 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:16.004 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:16.004 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:16.004 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.004 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2094523 00:25:16.004 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:16.004 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:16.004 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2094523 00:25:16.004 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2094523 ']' 00:25:16.004 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.004 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.004 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.004 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.004 18:24:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.004 [2024-11-19 18:24:16.715719] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:25:16.004 [2024-11-19 18:24:16.715790] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.004 [2024-11-19 18:24:16.814976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:16.004 [2024-11-19 18:24:16.869013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.004 [2024-11-19 18:24:16.869068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:16.004 [2024-11-19 18:24:16.869077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.004 [2024-11-19 18:24:16.869085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.004 [2024-11-19 18:24:16.869091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:16.004 [2024-11-19 18:24:16.871551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.004 [2024-11-19 18:24:16.871680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.004 [2024-11-19 18:24:16.871827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.004 [2024-11-19 18:24:16.871827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:16.265 18:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.265 18:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:16.265 18:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:16.265 [2024-11-19 18:24:17.704698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.526 18:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:16.526 18:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:16.526 18:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.526 18:24:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:16.526 Malloc1 00:25:16.788 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:16.788 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:17.049 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:17.311 [2024-11-19 18:24:18.571995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.311 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:17.573 18:24:18 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:17.573 18:24:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:17.835 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:17.835 fio-3.35 00:25:17.835 Starting 1 thread 00:25:20.397 00:25:20.397 test: (groupid=0, jobs=1): err= 0: pid=2095335: Tue Nov 19 18:24:21 2024 00:25:20.397 read: IOPS=13.1k, BW=51.1MiB/s (53.6MB/s)(102MiB/2005msec) 00:25:20.397 slat (usec): min=2, max=288, avg= 2.15, stdev= 2.50 00:25:20.397 clat (usec): min=3822, max=9249, avg=5372.77, stdev=786.42 00:25:20.397 lat (usec): min=3860, max=9251, avg=5374.93, stdev=786.53 00:25:20.397 clat percentiles (usec): 00:25:20.397 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4883], 00:25:20.397 | 30.00th=[ 5014], 40.00th=[ 5080], 50.00th=[ 5211], 60.00th=[ 5276], 00:25:20.397 | 70.00th=[ 5407], 80.00th=[ 5538], 90.00th=[ 5997], 95.00th=[ 7504], 00:25:20.397 | 99.00th=[ 8225], 99.50th=[ 8455], 99.90th=[ 8848], 99.95th=[ 8979], 00:25:20.397 | 99.99th=[ 9110] 00:25:20.397 bw ( KiB/s): min=43640, max=55720, per=100.00%, avg=52340.00, stdev=5813.23, samples=4 00:25:20.397 iops : min=10910, max=13930, avg=13085.00, stdev=1453.31, samples=4 00:25:20.397 write: IOPS=13.1k, BW=51.1MiB/s (53.6MB/s)(102MiB/2005msec); 0 zone resets 00:25:20.397 slat (usec): min=2, max=271, avg= 2.24, stdev= 1.85 00:25:20.397 clat (usec): min=2921, max=8412, avg=4351.80, stdev=665.42 00:25:20.397 lat (usec): min=2939, max=8414, avg=4354.04, stdev=665.56 00:25:20.397 clat percentiles (usec): 00:25:20.397 | 1.00th=[ 3490], 5.00th=[ 3720], 10.00th=[ 3818], 20.00th=[ 3949], 00:25:20.397 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4293], 00:25:20.397 | 70.00th=[ 
4359], 80.00th=[ 4490], 90.00th=[ 4948], 95.00th=[ 6194], 00:25:20.397 | 99.00th=[ 6718], 99.50th=[ 6849], 99.90th=[ 7242], 99.95th=[ 7570], 00:25:20.397 | 99.99th=[ 8029] 00:25:20.397 bw ( KiB/s): min=43936, max=55592, per=100.00%, avg=52348.00, stdev=5638.81, samples=4 00:25:20.397 iops : min=10984, max=13898, avg=13087.00, stdev=1409.70, samples=4 00:25:20.397 lat (msec) : 4=12.62%, 10=87.38% 00:25:20.397 cpu : usr=71.21%, sys=27.35%, ctx=42, majf=0, minf=17 00:25:20.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:20.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:20.398 issued rwts: total=26233,26238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.398 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:20.398 00:25:20.398 Run status group 0 (all jobs): 00:25:20.398 READ: bw=51.1MiB/s (53.6MB/s), 51.1MiB/s-51.1MiB/s (53.6MB/s-53.6MB/s), io=102MiB (107MB), run=2005-2005msec 00:25:20.398 WRITE: bw=51.1MiB/s (53.6MB/s), 51.1MiB/s-51.1MiB/s (53.6MB/s-53.6MB/s), io=102MiB (107MB), run=2005-2005msec 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:20.398 
18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:20.398 18:24:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:20.659 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:20.659 fio-3.35 00:25:20.659 Starting 1 thread 00:25:23.198 00:25:23.198 test: (groupid=0, jobs=1): err= 0: pid=2095928: Tue Nov 19 18:24:24 2024 00:25:23.198 read: IOPS=9558, BW=149MiB/s (157MB/s)(299MiB/2003msec) 00:25:23.198 slat (usec): min=3, max=110, avg= 3.60, stdev= 1.59 00:25:23.198 clat (usec): min=1930, max=14278, avg=8188.12, stdev=1904.33 00:25:23.198 lat (usec): min=1934, max=14282, avg=8191.72, stdev=1904.45 00:25:23.198 clat percentiles (usec): 00:25:23.198 | 1.00th=[ 4293], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6456], 00:25:23.198 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8094], 60.00th=[ 8586], 00:25:23.198 | 70.00th=[ 9241], 80.00th=[10028], 90.00th=[10683], 95.00th=[11207], 00:25:23.198 | 99.00th=[12649], 99.50th=[13173], 99.90th=[13698], 99.95th=[13960], 00:25:23.198 | 99.99th=[14222] 00:25:23.198 bw ( KiB/s): min=66400, max=81312, per=49.06%, avg=75032.00, stdev=6237.75, samples=4 00:25:23.198 iops : min= 4150, max= 5082, avg=4689.50, stdev=389.86, samples=4 00:25:23.198 write: IOPS=5550, BW=86.7MiB/s (90.9MB/s)(154MiB/1771msec); 0 zone resets 00:25:23.198 slat (usec): min=39, max=338, avg=40.85, stdev= 7.03 00:25:23.198 clat (usec): min=1951, max=14562, avg=9124.41, stdev=1315.06 00:25:23.198 lat (usec): min=1991, max=14700, avg=9165.26, stdev=1316.49 00:25:23.198 clat percentiles (usec): 00:25:23.198 | 1.00th=[ 6259], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 8094], 
00:25:23.198 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:25:23.198 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10814], 95.00th=[11338], 00:25:23.198 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13960], 99.95th=[14222], 00:25:23.198 | 99.99th=[14615] 00:25:23.198 bw ( KiB/s): min=69888, max=85312, per=88.06%, avg=78208.00, stdev=6338.48, samples=4 00:25:23.198 iops : min= 4368, max= 5332, avg=4888.00, stdev=396.15, samples=4 00:25:23.198 lat (msec) : 2=0.01%, 4=0.55%, 10=78.52%, 20=20.92% 00:25:23.198 cpu : usr=84.77%, sys=13.79%, ctx=18, majf=0, minf=29 00:25:23.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:23.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:23.198 issued rwts: total=19146,9830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.198 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:23.198 00:25:23.198 Run status group 0 (all jobs): 00:25:23.198 READ: bw=149MiB/s (157MB/s), 149MiB/s-149MiB/s (157MB/s-157MB/s), io=299MiB (314MB), run=2003-2003msec 00:25:23.198 WRITE: bw=86.7MiB/s (90.9MB/s), 86.7MiB/s-86.7MiB/s (90.9MB/s-90.9MB/s), io=154MiB (161MB), run=1771-1771msec 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:23.198 
18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:23.198 rmmod nvme_tcp 00:25:23.198 rmmod nvme_fabrics 00:25:23.198 rmmod nvme_keyring 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2094523 ']' 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2094523 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2094523 ']' 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2094523 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2094523 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2094523' 00:25:23.198 
killing process with pid 2094523 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2094523 00:25:23.198 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2094523 00:25:23.459 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:23.459 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:23.459 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:23.459 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:23.459 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:23.459 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:23.459 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:23.459 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:23.459 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:23.459 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.459 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.459 18:24:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.370 18:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:25.370 00:25:25.370 real 0m17.919s 00:25:25.370 user 1m1.509s 00:25:25.370 sys 0m7.723s 00:25:25.370 18:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:25.370 18:24:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.370 ************************************ 00:25:25.370 END 
TEST nvmf_fio_host 00:25:25.370 ************************************ 00:25:25.631 18:24:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:25.631 18:24:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:25.631 18:24:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:25.631 18:24:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.631 ************************************ 00:25:25.631 START TEST nvmf_failover 00:25:25.631 ************************************ 00:25:25.631 18:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:25.631 * Looking for test storage... 00:25:25.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:25.631 18:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:25.631 18:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:25:25.631 18:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@337 -- # IFS=.-: 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:25.631 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:25.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.632 --rc genhtml_branch_coverage=1 00:25:25.632 --rc genhtml_function_coverage=1 00:25:25.632 --rc genhtml_legend=1 00:25:25.632 --rc geninfo_all_blocks=1 00:25:25.632 --rc geninfo_unexecuted_blocks=1 00:25:25.632 00:25:25.632 ' 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:25.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.632 --rc genhtml_branch_coverage=1 00:25:25.632 --rc genhtml_function_coverage=1 00:25:25.632 --rc genhtml_legend=1 00:25:25.632 --rc geninfo_all_blocks=1 00:25:25.632 --rc geninfo_unexecuted_blocks=1 00:25:25.632 00:25:25.632 ' 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:25.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.632 --rc genhtml_branch_coverage=1 00:25:25.632 --rc genhtml_function_coverage=1 00:25:25.632 --rc genhtml_legend=1 00:25:25.632 --rc geninfo_all_blocks=1 00:25:25.632 --rc geninfo_unexecuted_blocks=1 00:25:25.632 00:25:25.632 ' 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:25.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.632 --rc genhtml_branch_coverage=1 00:25:25.632 --rc genhtml_function_coverage=1 00:25:25.632 --rc genhtml_legend=1 00:25:25.632 --rc geninfo_all_blocks=1 
00:25:25.632 --rc geninfo_unexecuted_blocks=1 00:25:25.632 00:25:25.632 ' 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.632 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:25.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:25.893 18:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:34.035 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:34.035 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:34.035 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:34.035 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:25:34.035 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:34.035 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:34.035 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:34.035 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:34.035 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:34.035 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:34.035 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:34.035 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:34.035 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:34.035 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:34.035 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.036 18:24:34 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:34.036 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:34.036 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:34.036 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:34.036 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:34.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:25:34.036 00:25:34.036 --- 10.0.0.2 ping statistics --- 00:25:34.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.036 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:34.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:25:34.036 00:25:34.036 --- 10.0.0.1 ping statistics --- 00:25:34.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.036 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.036 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:34.037 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:34.037 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.037 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:34.037 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:34.037 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:34.037 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:34.037 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:34.037 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:34.037 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2100580 00:25:34.037 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2100580 00:25:34.037 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:34.037 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2100580 ']' 00:25:34.037 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.037 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:34.037 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.037 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:34.037 18:24:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:34.037 [2024-11-19 18:24:34.723020] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:25:34.037 [2024-11-19 18:24:34.723085] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.037 [2024-11-19 18:24:34.822328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:34.037 [2024-11-19 18:24:34.874462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.037 [2024-11-19 18:24:34.874515] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.037 [2024-11-19 18:24:34.874523] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.037 [2024-11-19 18:24:34.874530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:34.037 [2024-11-19 18:24:34.874536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:34.037 [2024-11-19 18:24:34.876599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.037 [2024-11-19 18:24:34.876760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.037 [2024-11-19 18:24:34.876761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:34.299 18:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:34.299 18:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:34.299 18:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:34.299 18:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:34.299 18:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:34.299 18:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:34.299 18:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:34.299 [2024-11-19 18:24:35.756197] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.560 18:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:34.560 Malloc0 00:25:34.560 18:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:34.821 18:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:35.082 18:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.343 [2024-11-19 18:24:36.586821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.343 18:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:35.343 [2024-11-19 18:24:36.787349] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:35.604 18:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:35.604 [2024-11-19 18:24:36.971919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:35.604 18:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2101172 00:25:35.604 18:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:35.604 18:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:35.604 18:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2101172 /var/tmp/bdevperf.sock 00:25:35.604 18:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 2101172 ']' 00:25:35.604 18:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:35.604 18:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:35.604 18:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:35.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:35.604 18:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:35.604 18:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:36.544 18:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:36.544 18:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:36.544 18:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:36.805 NVMe0n1 00:25:36.805 18:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:37.065 00:25:37.065 18:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2101319 00:25:37.065 18:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:37.065 18:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:25:38.007 18:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:38.267 18:24:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:41.562 18:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:41.562 00:25:41.562 18:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:41.821 [2024-11-19 18:24:43.088632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd9040 is same with the state(6) to be set 00:25:41.822 18:24:43 nvmf_tcp.nvmf_host.nvmf_failover
-- host/failover.sh@50 -- # sleep 3 00:25:45.115 18:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.115 [2024-11-19 18:24:46.274590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.115 18:24:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:46.056 18:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:46.056 18:24:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2101319 00:25:52.647 { 00:25:52.647 "results": [ 00:25:52.647 { 00:25:52.647 "job": "NVMe0n1", 00:25:52.647 "core_mask": "0x1", 00:25:52.647 "workload": "verify", 00:25:52.647 "status": "finished", 00:25:52.647 "verify_range": { 00:25:52.647 "start": 0, 00:25:52.647 "length": 16384 00:25:52.647 }, 00:25:52.647 "queue_depth": 128, 00:25:52.647 "io_size": 4096, 00:25:52.647 "runtime": 15.005367, 00:25:52.647 "iops": 12544.378288115178, 00:25:52.647 "mibps": 49.001477687949915, 00:25:52.647 "io_failed": 8477, 00:25:52.647 "io_timeout": 0, 00:25:52.647 "avg_latency_us": 9743.316914713707, 00:25:52.647 "min_latency_us": 512.0, 00:25:52.647 "max_latency_us": 29272.746666666666 00:25:52.647 } 00:25:52.647 ], 00:25:52.647 "core_count": 1 00:25:52.647 } 00:25:52.647 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2101172 00:25:52.647 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2101172 ']' 00:25:52.647 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2101172 00:25:52.647 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 
00:25:52.647 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.647 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2101172 00:25:52.647 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:52.647 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:52.647 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2101172' 00:25:52.647 killing process with pid 2101172 00:25:52.647 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2101172 00:25:52.647 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2101172 00:25:52.647 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:52.647 [2024-11-19 18:24:37.040629] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:25:52.647 [2024-11-19 18:24:37.040688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2101172 ] 00:25:52.647 [2024-11-19 18:24:37.127839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.647 [2024-11-19 18:24:37.163719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.647 Running I/O for 15 seconds... 
00:25:52.647 11314.00 IOPS, 44.20 MiB/s [2024-11-19T17:24:54.118Z] [2024-11-19 18:24:39.511034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.647 [2024-11-19 18:24:39.511079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.647 [2024-11-19 18:24:39.511095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.647 [2024-11-19 18:24:39.511103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.647 [2024-11-19 18:24:39.511113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.647 [2024-11-19 18:24:39.511121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.647 [2024-11-19 18:24:39.511131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-11-19 18:24:39.511138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.648 [2024-11-19 18:24:39.511148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-11-19 18:24:39.511155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.648 [2024-11-19 18:24:39.511171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 
[2024-11-19 18:24:39.511179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.648 [2024-11-19 18:24:39.511188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-11-19 18:24:39.511195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.648 [2024-11-19 18:24:39.511205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-11-19 18:24:39.511212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.648 [2024-11-19 18:24:39.511221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-11-19 18:24:39.511229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.648 [2024-11-19 18:24:39.511239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-11-19 18:24:39.511246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.648 [2024-11-19 18:24:39.511255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-11-19 18:24:39.511263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.648 [2024-11-19 18:24:39.511279] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-11-19 18:24:39.511286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.648 [2024-11-19 18:24:39.511296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-11-19 18:24:39.511303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.648 [2024-11-19 18:24:39.511314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-11-19 18:24:39.511321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.648 [2024-11-19 18:24:39.511331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-11-19 18:24:39.511338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.648 [2024-11-19 18:24:39.511348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-11-19 18:24:39.511356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.648 [2024-11-19 18:24:39.511368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.648 [2024-11-19 18:24:39.511377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0
00:25:52.648 [2024-11-19 18:24:39.511387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:52.648 [2024-11-19 18:24:39.511395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical NOTICE pairs repeated for every queued I/O on qid:1 — WRITE lba:98400-98752 and READ lba:97736-98248, each aborted with SQ DELETION (00/08) ...]
00:25:52.651 [2024-11-19 18:24:39.513279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cdf00 is same with the state(6) to be set
00:25:52.651 [2024-11-19 18:24:39.513288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:52.651 [2024-11-19 18:24:39.513294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:52.651 [2024-11-19 18:24:39.513301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98248 len:8 PRP1 0x0 PRP2 0x0
00:25:52.651 [2024-11-19 18:24:39.513309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:52.651 [2024-11-19 18:24:39.513349] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:52.651 [2024-11-19 18:24:39.513374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:52.651 [2024-11-19 18:24:39.513383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ASYNC EVENT REQUEST abort pair repeated for qid:0 cid:1, cid:2, and cid:3 ...]
00:25:52.651 [2024-11-19 18:24:39.513438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:52.651 [2024-11-19 18:24:39.517556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:52.651 [2024-11-19 18:24:39.517581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6acd70 (9): Bad file descriptor
00:25:52.651 [2024-11-19 18:24:39.542871] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:25:52.651 11462.50 IOPS, 44.78 MiB/s [2024-11-19T17:24:54.122Z] 11507.00 IOPS, 44.95 MiB/s [2024-11-19T17:24:54.122Z] 11797.00 IOPS, 46.08 MiB/s [2024-11-19T17:24:54.122Z] [2024-11-19 18:24:43.089130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.651 [2024-11-19 18:24:43.089165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... repeated aborted WRITE completions (lba:60264-60960) and READ completions (lba:60072-60120) elided; all fail with ABORTED - SQ DELETION (00/08) qid:1 ...] 00:25:52.654 [2024-11-19 18:24:43.090291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.654 [2024-11-19 18:24:43.090297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.654 [2024-11-19 18:24:43.090308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.654 [2024-11-19 18:24:43.090320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60992 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61000 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61008 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61016 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61024 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090584] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61032 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61040 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61048 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61056 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 
[2024-11-19 18:24:43.090648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61064 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61072 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61080 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60128 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60136 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60144 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090779] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60152 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60160 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60168 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60176 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 
[2024-11-19 18:24:43.090847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.654 [2024-11-19 18:24:43.090855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60184 len:8 PRP1 0x0 PRP2 0x0 00:25:52.654 [2024-11-19 18:24:43.090860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.654 [2024-11-19 18:24:43.090865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.654 [2024-11-19 18:24:43.090869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.101960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60192 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.101987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60200 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 
[2024-11-19 18:24:43.102039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60208 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60216 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60224 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60232 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60240 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60248 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61088 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102196] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60256 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60264 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60272 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60280 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 
[2024-11-19 18:24:43.102275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60288 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60296 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60304 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60312 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60320 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60328 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60336 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60344 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60352 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60360 len:8 PRP1 0x0 PRP2 0x0 00:25:52.655 [2024-11-19 18:24:43.102499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:52.655 [2024-11-19 18:24:43.102505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.655 [2024-11-19 18:24:43.102509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.655 [2024-11-19 18:24:43.102514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60368 len:8 PRP1 0x0 PRP2 0x0 00:25:52.656 [2024-11-19 18:24:43.102520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 18:24:43.102526 .. 18:24:43.112200] (same abort/manual-complete sequence repeated for each remaining queued command on qid:1: WRITE lba:60376 .. lba:60880 in len:8 steps and READ lba:60072 .. lba:60120 in len:8 steps, every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
[2024-11-19 18:24:43.112209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.659 [2024-11-19 18:24:43.112224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.659 [2024-11-19 18:24:43.112230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60888 len:8 PRP1 0x0 PRP2 0x0 00:25:52.659 [2024-11-19 18:24:43.112239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.659 [2024-11-19 18:24:43.112253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.659 [2024-11-19 18:24:43.112260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60896 len:8 PRP1 0x0 PRP2 0x0 00:25:52.659 [2024-11-19 18:24:43.112268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.659 [2024-11-19 18:24:43.112283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.659 [2024-11-19 18:24:43.112290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60904 len:8 PRP1 0x0 PRP2 0x0 00:25:52.659 [2024-11-19 18:24:43.112298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:25:52.659 [2024-11-19 18:24:43.112315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.659 [2024-11-19 18:24:43.112322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60912 len:8 PRP1 0x0 PRP2 0x0 00:25:52.659 [2024-11-19 18:24:43.112329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.659 [2024-11-19 18:24:43.112344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.659 [2024-11-19 18:24:43.112351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60920 len:8 PRP1 0x0 PRP2 0x0 00:25:52.659 [2024-11-19 18:24:43.112358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.659 [2024-11-19 18:24:43.112373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.659 [2024-11-19 18:24:43.112380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60928 len:8 PRP1 0x0 PRP2 0x0 00:25:52.659 [2024-11-19 18:24:43.112388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.659 [2024-11-19 18:24:43.112403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.659 [2024-11-19 18:24:43.112410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60936 len:8 PRP1 0x0 PRP2 0x0 00:25:52.659 [2024-11-19 18:24:43.112418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.659 [2024-11-19 18:24:43.112432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.659 [2024-11-19 18:24:43.112439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60944 len:8 PRP1 0x0 PRP2 0x0 00:25:52.659 [2024-11-19 18:24:43.112447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.659 [2024-11-19 18:24:43.112462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.659 [2024-11-19 18:24:43.112469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60952 len:8 PRP1 0x0 PRP2 0x0 00:25:52.659 [2024-11-19 18:24:43.112477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.659 [2024-11-19 18:24:43.112491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.659 [2024-11-19 18:24:43.112499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60960 len:8 PRP1 0x0 PRP2 0x0 00:25:52.659 [2024-11-19 18:24:43.112507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.659 [2024-11-19 18:24:43.112521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.659 [2024-11-19 18:24:43.112528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60968 len:8 PRP1 0x0 PRP2 0x0 00:25:52.659 [2024-11-19 18:24:43.112538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.659 [2024-11-19 18:24:43.112553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.659 [2024-11-19 18:24:43.112560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60976 len:8 PRP1 0x0 PRP2 0x0 00:25:52.659 [2024-11-19 18:24:43.112568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.659 [2024-11-19 18:24:43.112583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.659 [2024-11-19 18:24:43.112590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60984 len:8 PRP1 0x0 PRP2 0x0 00:25:52.659 [2024-11-19 18:24:43.112598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.659 [2024-11-19 18:24:43.112613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:52.659 [2024-11-19 18:24:43.112620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60992 len:8 PRP1 0x0 PRP2 0x0 00:25:52.659 [2024-11-19 18:24:43.112627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112671] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:52.659 [2024-11-19 18:24:43.112703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.659 [2024-11-19 18:24:43.112713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.659 [2024-11-19 18:24:43.112733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.659 [2024-11-19 18:24:43.112750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.659 [2024-11-19 18:24:43.112767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.659 [2024-11-19 18:24:43.112775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:52.659 [2024-11-19 18:24:43.112807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6acd70 (9): Bad file descriptor 00:25:52.659 [2024-11-19 18:24:43.117666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:52.659 [2024-11-19 18:24:43.228557] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:25:52.659 11651.00 IOPS, 45.51 MiB/s [2024-11-19T17:24:54.130Z] 11854.83 IOPS, 46.31 MiB/s [2024-11-19T17:24:54.130Z] 12028.00 IOPS, 46.98 MiB/s [2024-11-19T17:24:54.130Z] 12144.12 IOPS, 47.44 MiB/s [2024-11-19T17:24:54.130Z] 12237.22 IOPS, 47.80 MiB/s [2024-11-19T17:24:54.130Z] [2024-11-19 18:24:47.463781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.660 [2024-11-19 18:24:47.463822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.660 [2024-11-19 18:24:47.463829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.660 [2024-11-19 18:24:47.463835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.660 [2024-11-19 18:24:47.463841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.660 [2024-11-19 18:24:47.463847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.660 [2024-11-19 18:24:47.463853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:25:52.660 [2024-11-19 18:24:47.463858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.660 [2024-11-19 18:24:47.463863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6acd70 is same with the state(6) to be set 00:25:52.660
[nvme_io_qpair_print_command / spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) pairs repeat for each queued I/O on qid:1: WRITE SGL DATA BLOCK commands lba:21552 through lba:21936 and READ SGL TRANSPORT DATA BLOCK commands lba:21032 through lba:21080, timestamps 18:24:47.463920 through 18:24:47.464620]
00:25:52.661 [2024-11-19 18:24:47.464620] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.661 [2024-11-19 18:24:47.464626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.661 [2024-11-19 18:24:47.464631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.661 [2024-11-19 18:24:47.464638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.661 [2024-11-19 18:24:47.464643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.661 [2024-11-19 18:24:47.464650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.661 [2024-11-19 18:24:47.464655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.661 [2024-11-19 18:24:47.464661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.661 [2024-11-19 18:24:47.464668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.661 [2024-11-19 18:24:47.464674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.661 [2024-11-19 18:24:47.464680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.661 [2024-11-19 18:24:47.464687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21984 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:52.661 [2024-11-19 18:24:47.464692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.661 [2024-11-19 18:24:47.464698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.661 [2024-11-19 18:24:47.464704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.661 [2024-11-19 18:24:47.464710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.661 [2024-11-19 18:24:47.464716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.661 [2024-11-19 18:24:47.464722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.661 [2024-11-19 18:24:47.464727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.661 [2024-11-19 18:24:47.464733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.661 [2024-11-19 18:24:47.464739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.661 [2024-11-19 18:24:47.464745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.661 [2024-11-19 18:24:47.464751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.661 [2024-11-19 18:24:47.464758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.661 [2024-11-19 18:24:47.464763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.661 [2024-11-19 18:24:47.464770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.661 [2024-11-19 18:24:47.464775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.661 [2024-11-19 18:24:47.464782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.661 [2024-11-19 18:24:47.464787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.661 [2024-11-19 18:24:47.464794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.464799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.464806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.464811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.464819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.464824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.464830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.464835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.464842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.464848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.464855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.464860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.464866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.464871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.464878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.464882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.464889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 
[2024-11-19 18:24:47.464894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.464901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.464906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.464912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.464918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.464924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.464929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.464936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.464940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.464947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.464952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.464959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.464965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.464972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.464977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.464983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.464988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.464995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:52.662 [2024-11-19 18:24:47.465094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.662 [2024-11-19 18:24:47.465264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.662 [2024-11-19 18:24:47.465269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.663 [2024-11-19 18:24:47.465283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.663 [2024-11-19 18:24:47.465294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.663 
[2024-11-19 18:24:47.465306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.663 [2024-11-19 18:24:47.465318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.663 [2024-11-19 18:24:47.465329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.663 [2024-11-19 18:24:47.465340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.663 [2024-11-19 18:24:47.465352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.663 [2024-11-19 18:24:47.465364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.663 [2024-11-19 18:24:47.465376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.663 [2024-11-19 18:24:47.465387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.663 [2024-11-19 18:24:47.465398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.663 [2024-11-19 18:24:47.465410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.663 [2024-11-19 18:24:47.465422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.663 [2024-11-19 18:24:47.465434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.663 [2024-11-19 18:24:47.465446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.663 [2024-11-19 18:24:47.465459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.663 [2024-11-19 18:24:47.465471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.663 [2024-11-19 18:24:47.465492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.663 [2024-11-19 18:24:47.465497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21544 len:8 PRP1 0x0 PRP2 0x0 00:25:52.663 [2024-11-19 18:24:47.465502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.663 [2024-11-19 18:24:47.465539] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:52.663 [2024-11-19 18:24:47.465547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:25:52.663 [2024-11-19 18:24:47.468379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:52.663 [2024-11-19 18:24:47.468401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6acd70 (9): Bad file descriptor 00:25:52.663 [2024-11-19 18:24:47.490957] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:25:52.663 12277.90 IOPS, 47.96 MiB/s [2024-11-19T17:24:54.134Z] 12359.00 IOPS, 48.28 MiB/s [2024-11-19T17:24:54.134Z] 12416.25 IOPS, 48.50 MiB/s [2024-11-19T17:24:54.134Z] 12464.38 IOPS, 48.69 MiB/s [2024-11-19T17:24:54.134Z] 12501.79 IOPS, 48.84 MiB/s 00:25:52.663 Latency(us) 00:25:52.663 [2024-11-19T17:24:54.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:52.663 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:52.663 Verification LBA range: start 0x0 length 0x4000 00:25:52.663 NVMe0n1 : 15.01 12544.38 49.00 564.93 0.00 9743.32 512.00 29272.75 00:25:52.663 [2024-11-19T17:24:54.134Z] =================================================================================================================== 00:25:52.663 [2024-11-19T17:24:54.134Z] Total : 12544.38 49.00 564.93 0.00 9743.32 512.00 29272.75 00:25:52.663 Received shutdown signal, test time was about 15.000000 seconds 00:25:52.663 00:25:52.663 Latency(us) 00:25:52.663 [2024-11-19T17:24:54.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:52.663 [2024-11-19T17:24:54.134Z] =================================================================================================================== 00:25:52.663 [2024-11-19T17:24:54.134Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:52.663 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:52.663 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@65 -- # count=3 00:25:52.663 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:52.663 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2104216 00:25:52.663 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2104216 /var/tmp/bdevperf.sock 00:25:52.663 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:52.663 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2104216 ']' 00:25:52.663 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:52.663 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:52.663 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:52.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:52.663 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:52.663 18:24:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:53.234 18:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.234 18:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:53.234 18:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:53.234 [2024-11-19 18:24:54.684078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:53.494 18:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:53.494 [2024-11-19 18:24:54.868559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:53.495 18:24:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:54.065 NVMe0n1 00:25:54.065 18:24:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:54.327 00:25:54.327 18:24:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:54.897 00:25:54.897 18:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:54.897 18:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:54.897 18:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:55.156 18:24:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:58.454 18:24:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:58.454 18:24:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:58.454 18:24:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2105511 00:25:58.454 18:24:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:58.454 18:24:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2105511 00:25:59.395 { 00:25:59.395 "results": [ 00:25:59.395 { 00:25:59.395 "job": "NVMe0n1", 00:25:59.395 "core_mask": "0x1", 00:25:59.395 "workload": "verify", 00:25:59.395 "status": "finished", 00:25:59.395 "verify_range": { 00:25:59.395 "start": 0, 00:25:59.395 "length": 16384 00:25:59.395 }, 00:25:59.395 "queue_depth": 128, 00:25:59.395 "io_size": 4096, 00:25:59.395 "runtime": 1.007236, 00:25:59.395 "iops": 12908.593418027156, 00:25:59.395 "mibps": 50.42419303916858, 00:25:59.395 "io_failed": 0, 00:25:59.395 "io_timeout": 0, 00:25:59.395 "avg_latency_us": 
9880.650189201662, 00:25:59.395 "min_latency_us": 2034.3466666666666, 00:25:59.395 "max_latency_us": 13598.72 00:25:59.395 } 00:25:59.395 ], 00:25:59.395 "core_count": 1 00:25:59.395 } 00:25:59.395 18:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:59.395 [2024-11-19 18:24:53.726943] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:25:59.395 [2024-11-19 18:24:53.727002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104216 ] 00:25:59.395 [2024-11-19 18:24:53.812039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.395 [2024-11-19 18:24:53.840240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.395 [2024-11-19 18:24:56.412774] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:59.395 [2024-11-19 18:24:56.412813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.395 [2024-11-19 18:24:56.412823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.395 [2024-11-19 18:24:56.412830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.395 [2024-11-19 18:24:56.412835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.395 [2024-11-19 18:24:56.412841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:25:59.395 [2024-11-19 18:24:56.412846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.395 [2024-11-19 18:24:56.412852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.395 [2024-11-19 18:24:56.412857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.395 [2024-11-19 18:24:56.412863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:59.395 [2024-11-19 18:24:56.412882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:59.395 [2024-11-19 18:24:56.412893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b2d70 (9): Bad file descriptor 00:25:59.395 [2024-11-19 18:24:56.434065] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:59.395 Running I/O for 1 seconds... 
00:25:59.395 12874.00 IOPS, 50.29 MiB/s 00:25:59.395 Latency(us) 00:25:59.395 [2024-11-19T17:25:00.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.395 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:59.395 Verification LBA range: start 0x0 length 0x4000 00:25:59.395 NVMe0n1 : 1.01 12908.59 50.42 0.00 0.00 9880.65 2034.35 13598.72 00:25:59.395 [2024-11-19T17:25:00.866Z] =================================================================================================================== 00:25:59.395 [2024-11-19T17:25:00.867Z] Total : 12908.59 50.42 0.00 0.00 9880.65 2034.35 13598.72 00:25:59.396 18:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:59.396 18:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:59.656 18:25:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:59.656 18:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:59.656 18:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:59.916 18:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:00.176 18:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:03.672 18:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:03.672 18:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:03.672 18:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2104216 00:26:03.672 18:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2104216 ']' 00:26:03.672 18:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2104216 00:26:03.672 18:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:03.672 18:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:03.672 18:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2104216 00:26:03.672 18:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:03.672 18:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:03.672 18:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2104216' 00:26:03.672 killing process with pid 2104216 00:26:03.672 18:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2104216 00:26:03.672 18:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2104216 00:26:03.672 18:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:03.672 18:25:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:03.672 rmmod nvme_tcp 00:26:03.672 rmmod nvme_fabrics 00:26:03.672 rmmod nvme_keyring 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2100580 ']' 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2100580 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2100580 ']' 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2100580 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:03.672 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2100580 00:26:03.932 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:26:03.932 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:03.932 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2100580' 00:26:03.932 killing process with pid 2100580 00:26:03.933 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2100580 00:26:03.933 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2100580 00:26:03.933 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:03.933 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:03.933 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:03.933 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:26:03.933 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:26:03.933 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:03.933 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:26:03.933 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:03.933 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:03.933 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.933 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:03.933 18:25:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:06.477 00:26:06.477 real 0m40.486s 00:26:06.477 user 2m4.421s 00:26:06.477 sys 
0m8.798s 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:06.477 ************************************ 00:26:06.477 END TEST nvmf_failover 00:26:06.477 ************************************ 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.477 ************************************ 00:26:06.477 START TEST nvmf_host_discovery 00:26:06.477 ************************************ 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:06.477 * Looking for test storage... 
00:26:06.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:06.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.477 --rc genhtml_branch_coverage=1 00:26:06.477 --rc genhtml_function_coverage=1 00:26:06.477 --rc 
genhtml_legend=1 00:26:06.477 --rc geninfo_all_blocks=1 00:26:06.477 --rc geninfo_unexecuted_blocks=1 00:26:06.477 00:26:06.477 ' 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:06.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.477 --rc genhtml_branch_coverage=1 00:26:06.477 --rc genhtml_function_coverage=1 00:26:06.477 --rc genhtml_legend=1 00:26:06.477 --rc geninfo_all_blocks=1 00:26:06.477 --rc geninfo_unexecuted_blocks=1 00:26:06.477 00:26:06.477 ' 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:06.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.477 --rc genhtml_branch_coverage=1 00:26:06.477 --rc genhtml_function_coverage=1 00:26:06.477 --rc genhtml_legend=1 00:26:06.477 --rc geninfo_all_blocks=1 00:26:06.477 --rc geninfo_unexecuted_blocks=1 00:26:06.477 00:26:06.477 ' 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:06.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.477 --rc genhtml_branch_coverage=1 00:26:06.477 --rc genhtml_function_coverage=1 00:26:06.477 --rc genhtml_legend=1 00:26:06.477 --rc geninfo_all_blocks=1 00:26:06.477 --rc geninfo_unexecuted_blocks=1 00:26:06.477 00:26:06.477 ' 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:06.477 18:25:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:06.477 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.477 18:25:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:06.478 18:25:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:06.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:06.478 18:25:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:14.619 
18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:14.619 18:25:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:14.619 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:14.619 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:14.619 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:14.619 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:14.619 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:14.620 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:14.620 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:14.620 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:14.620 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:14.620 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:14.620 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:14.620 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:14.620 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:14.620 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:14.620 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:14.620 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:14.620 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:14.620 18:25:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:14.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:14.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:26:14.620 00:26:14.620 --- 10.0.0.2 ping statistics --- 00:26:14.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.620 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:14.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:14.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:26:14.620 00:26:14.620 --- 10.0.0.1 ping statistics --- 00:26:14.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.620 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:14.620 
18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2110636 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2110636 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2110636 ']' 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:14.620 18:25:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.620 [2024-11-19 18:25:15.280935] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:26:14.620 [2024-11-19 18:25:15.281007] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.620 [2024-11-19 18:25:15.379342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.620 [2024-11-19 18:25:15.430013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:14.620 [2024-11-19 18:25:15.430063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:14.620 [2024-11-19 18:25:15.430075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:14.620 [2024-11-19 18:25:15.430085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:14.620 [2024-11-19 18:25:15.430093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:14.620 [2024-11-19 18:25:15.431028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.881 [2024-11-19 18:25:16.141002] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.881 [2024-11-19 18:25:16.153251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:14.881 18:25:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.881 null0 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.881 null1 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2110930 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2110930 /tmp/host.sock 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2110930 ']' 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:14.881 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:14.881 18:25:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.881 [2024-11-19 18:25:16.258350] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:26:14.881 [2024-11-19 18:25:16.258418] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2110930 ] 00:26:15.142 [2024-11-19 18:25:16.351571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.142 [2024-11-19 18:25:16.403896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:15.714 
18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:15.714 18:25:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:15.714 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.975 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:15.975 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:15.975 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.975 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.975 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.975 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:15.976 18:25:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.976 [2024-11-19 18:25:17.412515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:15.976 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 
00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.238 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:26:16.239 18:25:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:16.811 [2024-11-19 18:25:18.129384] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:16.811 [2024-11-19 18:25:18.129414] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:16.811 [2024-11-19 18:25:18.129429] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:16.811 [2024-11-19 18:25:18.215692] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:17.072 [2024-11-19 18:25:18.318771] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:17.072 [2024-11-19 18:25:18.319992] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb63780:1 started. 
00:26:17.072 [2024-11-19 18:25:18.321845] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:17.072 [2024-11-19 18:25:18.321872] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:17.072 [2024-11-19 18:25:18.328853] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb63780 was disconnected and freed. delete nvme_qpair. 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" 
== "$NVMF_PORT" ]]' 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:17.334 18:25:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:17.334 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.596 18:25:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:17.858 [2024-11-19 18:25:19.070433] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb63b20:1 started. 
00:26:17.858 [2024-11-19 18:25:19.080866] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb63b20 was disconnected and freed. delete nvme_qpair. 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.858 [2024-11-19 18:25:19.161566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:17.858 [2024-11-19 18:25:19.162225] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:17.858 [2024-11-19 18:25:19.162251] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:17.858 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:17.859 18:25:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:17.859 18:25:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:17.859 [2024-11-19 18:25:19.291082] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:17.859 18:25:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:18.120 [2024-11-19 18:25:19.397333] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:18.120 [2024-11-19 18:25:19.397390] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:18.120 [2024-11-19 18:25:19.397401] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:26:18.120 [2024-11-19 18:25:19.397407] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.064 [2024-11-19 18:25:20.433856] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:19.064 [2024-11-19 18:25:20.433880] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:19.064 [2024-11-19 18:25:20.435185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.064 [2024-11-19 18:25:20.435204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.064 [2024-11-19 18:25:20.435214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:19.064 [2024-11-19 18:25:20.435221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.064 [2024-11-19 18:25:20.435230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.064 [2024-11-19 18:25:20.435237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.064 [2024-11-19 18:25:20.435245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.064 [2024-11-19 18:25:20.435253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.064 [2024-11-19 18:25:20.435261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb33e10 is same with the state(6) to be set 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:19.064 18:25:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.064 [2024-11-19 18:25:20.445196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb33e10 (9): Bad file descriptor 00:26:19.064 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:19.064 [2024-11-19 18:25:20.455231] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:19.064 [2024-11-19 18:25:20.455245] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:19.064 [2024-11-19 18:25:20.455250] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:19.064 [2024-11-19 18:25:20.455260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:19.064 [2024-11-19 18:25:20.455279] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:19.065 [2024-11-19 18:25:20.455602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.065 [2024-11-19 18:25:20.455617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb33e10 with addr=10.0.0.2, port=4420 00:26:19.065 [2024-11-19 18:25:20.455626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb33e10 is same with the state(6) to be set 00:26:19.065 [2024-11-19 18:25:20.455639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb33e10 (9): Bad file descriptor 00:26:19.065 [2024-11-19 18:25:20.455651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:19.065 [2024-11-19 18:25:20.455658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:19.065 [2024-11-19 18:25:20.455667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:19.065 [2024-11-19 18:25:20.455673] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:19.065 [2024-11-19 18:25:20.455679] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:19.065 [2024-11-19 18:25:20.455684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:19.065 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.065 [2024-11-19 18:25:20.465309] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:19.065 [2024-11-19 18:25:20.465321] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:26:19.065 [2024-11-19 18:25:20.465328] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:19.065 [2024-11-19 18:25:20.465332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:19.065 [2024-11-19 18:25:20.465347] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:19.065 [2024-11-19 18:25:20.465687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.065 [2024-11-19 18:25:20.465699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb33e10 with addr=10.0.0.2, port=4420 00:26:19.065 [2024-11-19 18:25:20.465706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb33e10 is same with the state(6) to be set 00:26:19.065 [2024-11-19 18:25:20.465718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb33e10 (9): Bad file descriptor 00:26:19.065 [2024-11-19 18:25:20.465728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:19.065 [2024-11-19 18:25:20.465740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:19.065 [2024-11-19 18:25:20.465748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:19.065 [2024-11-19 18:25:20.465754] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:19.065 [2024-11-19 18:25:20.465759] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:19.065 [2024-11-19 18:25:20.465763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:19.065 [2024-11-19 18:25:20.475378] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:19.065 [2024-11-19 18:25:20.475389] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:19.065 [2024-11-19 18:25:20.475394] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:19.065 [2024-11-19 18:25:20.475398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:19.065 [2024-11-19 18:25:20.475412] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:19.065 [2024-11-19 18:25:20.475735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.065 [2024-11-19 18:25:20.475746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb33e10 with addr=10.0.0.2, port=4420
00:26:19.065 [2024-11-19 18:25:20.475754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb33e10 is same with the state(6) to be set
00:26:19.065 [2024-11-19 18:25:20.475765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb33e10 (9): Bad file descriptor
00:26:19.065 [2024-11-19 18:25:20.475775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:19.065 [2024-11-19 18:25:20.475782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:19.065 [2024-11-19 18:25:20.475789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:19.065 [2024-11-19 18:25:20.475796] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:19.065 [2024-11-19 18:25:20.475800] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:19.065 [2024-11-19 18:25:20.475805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:19.065 [2024-11-19 18:25:20.485444] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:19.065 [2024-11-19 18:25:20.485457] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:19.065 [2024-11-19 18:25:20.485462] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:19.065 [2024-11-19 18:25:20.485467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:19.065 [2024-11-19 18:25:20.485481] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:19.065 [2024-11-19 18:25:20.485813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.065 [2024-11-19 18:25:20.485825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb33e10 with addr=10.0.0.2, port=4420
00:26:19.065 [2024-11-19 18:25:20.485833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb33e10 is same with the state(6) to be set
00:26:19.065 [2024-11-19 18:25:20.485844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb33e10 (9): Bad file descriptor
00:26:19.065 [2024-11-19 18:25:20.485858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:19.065 [2024-11-19 18:25:20.485865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:19.065 [2024-11-19 18:25:20.485873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:19.065 [2024-11-19 18:25:20.485879] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:19.065 [2024-11-19 18:25:20.485884] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:19.065 [2024-11-19 18:25:20.485889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:19.065 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:19.065 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:19.065 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:19.065 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:19.065 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:19.065 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:19.065 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:26:19.065 [2024-11-19 18:25:20.495513] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:19.065 [2024-11-19 18:25:20.495526] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:19.065 [2024-11-19 18:25:20.495530] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:19.065 [2024-11-19 18:25:20.495535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:19.065 [2024-11-19 18:25:20.495549] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:19.065 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:26:19.065 [2024-11-19 18:25:20.495866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.065 [2024-11-19 18:25:20.495878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb33e10 with addr=10.0.0.2, port=4420
00:26:19.065 [2024-11-19 18:25:20.495886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb33e10 is same with the state(6) to be set
00:26:19.065 [2024-11-19 18:25:20.495897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb33e10 (9): Bad file descriptor
00:26:19.065 [2024-11-19 18:25:20.495907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:19.065 [2024-11-19 18:25:20.495916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:19.065 [2024-11-19 18:25:20.495923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:19.066 [2024-11-19 18:25:20.495929] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:19.066 [2024-11-19 18:25:20.495934] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:19.066 [2024-11-19 18:25:20.495940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:19.066 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:19.066 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:19.066 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:19.066 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:19.066 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:19.066 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:19.066 [2024-11-19 18:25:20.505581] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:19.066 [2024-11-19 18:25:20.505596] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:19.066 [2024-11-19 18:25:20.505600] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:19.066 [2024-11-19 18:25:20.505605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:19.066 [2024-11-19 18:25:20.505620] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:19.066 [2024-11-19 18:25:20.505960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.066 [2024-11-19 18:25:20.505973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb33e10 with addr=10.0.0.2, port=4420
00:26:19.066 [2024-11-19 18:25:20.505980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb33e10 is same with the state(6) to be set
00:26:19.066 [2024-11-19 18:25:20.505992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb33e10 (9): Bad file descriptor
00:26:19.066 [2024-11-19 18:25:20.506003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:19.066 [2024-11-19 18:25:20.506010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:19.066 [2024-11-19 18:25:20.506017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:19.066 [2024-11-19 18:25:20.506023] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:19.066 [2024-11-19 18:25:20.506028] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:19.066 [2024-11-19 18:25:20.506032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:19.066 [2024-11-19 18:25:20.515651] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:19.066 [2024-11-19 18:25:20.515664] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:19.066 [2024-11-19 18:25:20.515669] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:19.066 [2024-11-19 18:25:20.515673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:19.066 [2024-11-19 18:25:20.515688] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:19.066 [2024-11-19 18:25:20.515984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:19.066 [2024-11-19 18:25:20.515997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb33e10 with addr=10.0.0.2, port=4420
00:26:19.066 [2024-11-19 18:25:20.516004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb33e10 is same with the state(6) to be set
00:26:19.066 [2024-11-19 18:25:20.516015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb33e10 (9): Bad file descriptor
00:26:19.066 [2024-11-19 18:25:20.516026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:19.066 [2024-11-19 18:25:20.516032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:19.066 [2024-11-19 18:25:20.516043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:19.066 [2024-11-19 18:25:20.516049] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:19.066 [2024-11-19 18:25:20.516054] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:19.066 [2024-11-19 18:25:20.516059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:19.066 [2024-11-19 18:25:20.522132] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:26:19.066 [2024-11-19 18:25:20.522151] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]]
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:19.328 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:19.329 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:19.590 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:26:19.590 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:26:19.590 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:26:19.590 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:19.590 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:26:19.590 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:19.590 18:25:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:20.532 [2024-11-19 18:25:21.880332] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:26:20.532 [2024-11-19 18:25:21.880346] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:26:20.532 [2024-11-19 18:25:21.880355] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:26:20.532 [2024-11-19 18:25:21.968615] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:26:21.103 [2024-11-19 18:25:22.278025] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421
00:26:21.103 [2024-11-19 18:25:22.278716] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xb45050:1 started.
00:26:21.103 [2024-11-19 18:25:22.280090] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:26:21.103 [2024-11-19 18:25:22.280111] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:26:21.103 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.103 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:26:21.103 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:26:21.103 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:26:21.103 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:26:21.103 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:21.104 [2024-11-19 18:25:22.288478] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xb45050 was disconnected and freed. delete nvme_qpair.
00:26:21.104 request:
00:26:21.104 {
00:26:21.104 "name": "nvme",
00:26:21.104 "trtype": "tcp",
00:26:21.104 "traddr": "10.0.0.2",
00:26:21.104 "adrfam": "ipv4",
00:26:21.104 "trsvcid": "8009",
00:26:21.104 "hostnqn": "nqn.2021-12.io.spdk:test",
00:26:21.104 "wait_for_attach": true,
00:26:21.104 "method": "bdev_nvme_start_discovery",
00:26:21.104 "req_id": 1
00:26:21.104 }
00:26:21.104 Got JSON-RPC error response
00:26:21.104 response:
00:26:21.104 {
00:26:21.104 "code": -17,
00:26:21.104 "message": "File exists"
00:26:21.104 }
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:21.104 request:
00:26:21.104 {
00:26:21.104 "name": "nvme_second",
00:26:21.104 "trtype": "tcp",
00:26:21.104 "traddr": "10.0.0.2",
00:26:21.104 "adrfam": "ipv4",
00:26:21.104 "trsvcid": "8009",
00:26:21.104 "hostnqn": "nqn.2021-12.io.spdk:test",
00:26:21.104 "wait_for_attach": true,
00:26:21.104 "method": "bdev_nvme_start_discovery",
00:26:21.104 "req_id": 1
00:26:21.104 }
00:26:21.104 Got JSON-RPC error response
00:26:21.104 response:
00:26:21.104 {
00:26:21.104 "code": -17,
00:26:21.104 "message": "File exists"
00:26:21.104 }
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.104 18:25:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:22.488 [2024-11-19 18:25:23.535772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.488 [2024-11-19 18:25:23.535795] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb45af0 with addr=10.0.0.2, port=8010 00:26:22.488 [2024-11-19 18:25:23.535805] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:22.488 [2024-11-19 18:25:23.535810] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:22.488 [2024-11-19 18:25:23.535816] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:23.430 [2024-11-19 18:25:24.538151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.430 [2024-11-19 18:25:24.538173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb45af0 with addr=10.0.0.2, port=8010 00:26:23.430 [2024-11-19 18:25:24.538183] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:23.430 [2024-11-19 18:25:24.538188] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:23.430 [2024-11-19 18:25:24.538193] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:24.371 [2024-11-19 18:25:25.540157] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:24.371 request: 00:26:24.371 { 00:26:24.371 "name": "nvme_second", 00:26:24.371 "trtype": "tcp", 00:26:24.371 "traddr": "10.0.0.2", 00:26:24.371 "adrfam": "ipv4", 00:26:24.371 "trsvcid": "8010", 00:26:24.371 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:24.371 "wait_for_attach": false, 00:26:24.371 "attach_timeout_ms": 3000, 00:26:24.371 "method": "bdev_nvme_start_discovery", 00:26:24.371 "req_id": 1 00:26:24.371 } 00:26:24.371 Got JSON-RPC error response 00:26:24.371 response: 00:26:24.371 { 00:26:24.371 "code": -110, 00:26:24.371 "message": "Connection timed out" 00:26:24.371 } 00:26:24.371 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 
1 == 0 ]] 00:26:24.371 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:24.371 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:24.371 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:24.371 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:24.371 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:24.371 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:24.371 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:24.371 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.371 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:24.371 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.371 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:24.371 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2110930 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:24.372 18:25:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:24.372 rmmod nvme_tcp 00:26:24.372 rmmod nvme_fabrics 00:26:24.372 rmmod nvme_keyring 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2110636 ']' 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2110636 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2110636 ']' 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2110636 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2110636 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2110636' 
00:26:24.372 killing process with pid 2110636 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2110636 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2110636 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.372 18:25:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.917 18:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:26.917 00:26:26.917 real 0m20.457s 00:26:26.917 user 0m23.743s 00:26:26.917 sys 0m7.344s 00:26:26.917 18:25:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:26.917 18:25:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.917 ************************************ 00:26:26.917 END TEST nvmf_host_discovery 00:26:26.917 ************************************ 00:26:26.917 18:25:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:26.917 18:25:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:26.917 18:25:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:26.917 18:25:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.917 ************************************ 00:26:26.917 START TEST nvmf_host_multipath_status 00:26:26.917 ************************************ 00:26:26.917 18:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:26.917 * Looking for test storage... 
00:26:26.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:26.917 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:26.917 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:26:26.917 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:26.917 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:26.917 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:26.917 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:26.917 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:26.918 18:25:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:26.918 18:25:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:26.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.918 --rc genhtml_branch_coverage=1 00:26:26.918 --rc genhtml_function_coverage=1 00:26:26.918 --rc genhtml_legend=1 00:26:26.918 --rc geninfo_all_blocks=1 00:26:26.918 --rc geninfo_unexecuted_blocks=1 00:26:26.918 00:26:26.918 ' 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:26.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.918 --rc genhtml_branch_coverage=1 00:26:26.918 --rc genhtml_function_coverage=1 00:26:26.918 --rc genhtml_legend=1 00:26:26.918 --rc geninfo_all_blocks=1 00:26:26.918 --rc geninfo_unexecuted_blocks=1 00:26:26.918 00:26:26.918 ' 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:26.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.918 --rc genhtml_branch_coverage=1 00:26:26.918 --rc genhtml_function_coverage=1 00:26:26.918 --rc genhtml_legend=1 00:26:26.918 --rc geninfo_all_blocks=1 00:26:26.918 --rc geninfo_unexecuted_blocks=1 00:26:26.918 00:26:26.918 ' 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:26.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.918 --rc genhtml_branch_coverage=1 00:26:26.918 --rc genhtml_function_coverage=1 00:26:26.918 --rc genhtml_legend=1 00:26:26.918 --rc geninfo_all_blocks=1 00:26:26.918 --rc geninfo_unexecuted_blocks=1 00:26:26.918 00:26:26.918 ' 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:26.918 
18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.918 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:26.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:26.919 18:25:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:26.919 18:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:35.068 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:35.069 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:35.069 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:35.069 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.069 18:25:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:35.069 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:35.069 18:25:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:35.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:35.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:26:35.069 00:26:35.069 --- 10.0.0.2 ping statistics --- 00:26:35.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.069 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:35.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:35.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:26:35.069 00:26:35.069 --- 10.0.0.1 ping statistics --- 00:26:35.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.069 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2117107 00:26:35.069 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 2117107 00:26:35.070 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:35.070 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2117107 ']' 00:26:35.070 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.070 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:35.070 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.070 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:35.070 18:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:35.070 [2024-11-19 18:25:35.775877] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:26:35.070 [2024-11-19 18:25:35.775980] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.070 [2024-11-19 18:25:35.879819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:35.070 [2024-11-19 18:25:35.930823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.070 [2024-11-19 18:25:35.930874] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:35.070 [2024-11-19 18:25:35.930883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.070 [2024-11-19 18:25:35.930890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.070 [2024-11-19 18:25:35.930896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:35.070 [2024-11-19 18:25:35.932664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.070 [2024-11-19 18:25:35.932669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.331 18:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.331 18:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:35.331 18:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:35.331 18:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:35.331 18:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:35.331 18:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.331 18:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2117107 00:26:35.331 18:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:35.331 [2024-11-19 18:25:36.795647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.592 18:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:26:35.592 Malloc0 00:26:35.592 18:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:35.853 18:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:36.125 18:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.386 [2024-11-19 18:25:37.627463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.386 18:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:36.386 [2024-11-19 18:25:37.824046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:36.647 18:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2117470 00:26:36.647 18:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:36.647 18:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:36.647 18:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2117470 /var/tmp/bdevperf.sock 00:26:36.647 18:25:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2117470 ']' 00:26:36.647 18:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:36.647 18:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:36.647 18:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:36.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:36.647 18:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:36.647 18:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:37.589 18:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:37.589 18:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:37.589 18:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:37.589 18:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:38.159 Nvme0n1 00:26:38.159 18:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:38.418 Nvme0n1 00:26:38.418 18:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:38.418 18:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:40.331 18:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:40.331 18:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:40.591 18:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:40.851 18:25:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:41.792 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:41.792 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:41.792 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.792 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:42.053 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.053 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:42.053 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.053 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:42.053 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:42.053 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:42.053 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.053 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:42.313 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.313 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:42.313 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.313 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:42.575 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.575 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:42.575 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.575 18:25:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:42.575 18:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.575 18:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:42.836 18:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.836 18:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:42.836 18:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.836 18:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:42.836 18:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:43.098 18:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:43.098 18:25:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:44.484 18:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:44.484 18:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:44.484 18:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.484 18:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:44.484 18:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:44.484 18:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:44.484 18:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.484 18:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:44.484 18:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:44.484 18:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:44.484 18:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.484 18:25:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:44.746 18:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:44.746 18:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:44.746 18:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.746 18:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:45.006 18:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.007 18:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:45.007 18:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.007 18:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:45.266 18:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.266 18:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:45.267 18:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.267 18:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:45.267 18:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.267 18:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:45.267 18:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:45.526 18:25:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:45.787 18:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:46.728 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:46.728 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:46.728 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.728 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:46.987 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.987 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:46.988 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.988 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:46.988 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:46.988 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:46.988 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.988 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:47.248 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.248 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:47.248 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.248 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:47.507 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.507 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:47.507 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.507 18:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:47.766 18:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.766 18:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:47.766 18:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.766 18:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:47.766 18:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.766 18:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:47.767 18:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:48.026 18:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:48.286 18:25:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:49.229 18:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:49.229 18:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:49.229 18:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.229 18:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:49.490 18:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.490 18:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:49.490 18:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:49.490 18:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.490 18:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:49.490 18:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:49.490 18:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.490 18:25:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:49.751 18:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.751 18:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:49.751 18:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.751 18:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:50.011 18:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.012 18:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:50.012 18:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.012 18:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:50.012 18:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.012 18:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:50.012 18:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.012 18:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:50.279 18:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:50.280 18:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:50.280 18:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:50.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:50.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:51.926 18:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:51.926 18:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:51.926 18:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.926 18:25:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:51.926 18:25:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:51.926 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:51.926 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.926 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:51.926 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:51.926 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:51.926 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.926 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:52.217 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.217 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:52.217 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:52.217 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.478 
18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.478 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:52.478 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.478 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:52.478 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:52.478 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:52.478 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.478 18:25:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:52.738 18:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:52.738 18:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:52.738 18:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:52.999 18:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:52.999 18:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:54.045 18:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:54.045 18:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:54.045 18:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.045 18:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:54.305 18:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:54.305 18:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:54.305 18:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:54.305 18:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.305 18:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.305 18:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:54.565 18:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.565 18:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:54.565 18:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.565 18:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:54.565 18:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.565 18:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:54.825 18:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.825 18:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:54.825 18:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.825 18:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:55.085 18:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:55.085 18:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:55.085 18:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.085 18:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:55.085 18:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.085 18:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:55.346 18:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:55.346 18:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:55.607 18:25:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:55.607 18:25:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:56.991 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:56.991 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:56.991 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:56.991 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:56.991 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.991 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:56.991 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.991 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:56.991 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.991 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:56.991 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.991 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:57.252 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.252 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:57.252 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:57.252 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:57.512 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.512 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:57.512 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.512 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:57.512 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.512 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:57.512 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.512 18:25:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:57.773 18:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.773 18:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:57.773 18:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:58.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:58.294 18:25:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:59.239 18:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:59.239 18:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:59.239 18:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.239 18:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:59.500 18:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:59.500 18:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:59.500 18:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:59.500 18:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.500 18:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.500 18:26:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:59.500 18:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.500 18:26:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:59.761 18:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.761 18:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:59.762 18:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.762 18:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:00.022 18:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.022 18:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:00.022 18:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.022 18:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:00.022 18:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.022 
18:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:00.022 18:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.023 18:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:00.284 18:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.284 18:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:00.284 18:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:00.546 18:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:00.546 18:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:01.929 18:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:01.929 18:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:01.929 18:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.929 18:26:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:01.929 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.929 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:01.929 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.929 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:01.929 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.929 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:01.929 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.929 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:02.190 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.190 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:02.190 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.190 18:26:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:02.451 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.451 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:02.451 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.451 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:02.451 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.451 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:02.451 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.451 18:26:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:02.712 18:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.712 18:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:02.712 18:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:02.973 18:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:03.234 18:26:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:04.178 18:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:04.178 18:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:04.178 18:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.178 18:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:04.178 18:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.178 18:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:04.178 18:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.178 18:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:04.440 18:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:04.440 18:26:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:04.440 18:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.440 18:26:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:04.701 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.701 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:04.701 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.701 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:04.961 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.961 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:04.961 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.961 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:04.961 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.961 
18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:04.961 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.961 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:05.222 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:05.222 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2117470 00:27:05.222 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2117470 ']' 00:27:05.223 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2117470 00:27:05.223 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:05.223 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:05.223 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2117470 00:27:05.223 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:05.223 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:05.223 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2117470' 00:27:05.223 killing process with pid 2117470 00:27:05.223 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2117470 00:27:05.223 
18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2117470 00:27:05.223 { 00:27:05.223 "results": [ 00:27:05.223 { 00:27:05.223 "job": "Nvme0n1", 00:27:05.223 "core_mask": "0x4", 00:27:05.223 "workload": "verify", 00:27:05.223 "status": "terminated", 00:27:05.223 "verify_range": { 00:27:05.223 "start": 0, 00:27:05.223 "length": 16384 00:27:05.223 }, 00:27:05.223 "queue_depth": 128, 00:27:05.223 "io_size": 4096, 00:27:05.223 "runtime": 26.749133, 00:27:05.223 "iops": 12043.306226037308, 00:27:05.223 "mibps": 47.044164945458235, 00:27:05.223 "io_failed": 0, 00:27:05.223 "io_timeout": 0, 00:27:05.223 "avg_latency_us": 10610.67202403864, 00:27:05.223 "min_latency_us": 856.7466666666667, 00:27:05.223 "max_latency_us": 3075822.933333333 00:27:05.223 } 00:27:05.223 ], 00:27:05.223 "core_count": 1 00:27:05.223 } 00:27:05.512 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2117470 00:27:05.512 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:05.512 [2024-11-19 18:25:37.904569] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:27:05.512 [2024-11-19 18:25:37.904649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117470 ] 00:27:05.512 [2024-11-19 18:25:37.999050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.512 [2024-11-19 18:25:38.049186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.512 Running I/O for 90 seconds... 
00:27:05.512 9956.00 IOPS, 38.89 MiB/s [2024-11-19T17:26:06.983Z] 10967.00 IOPS, 42.84 MiB/s [2024-11-19T17:26:06.983Z] 11257.33 IOPS, 43.97 MiB/s [2024-11-19T17:26:06.983Z] 11690.50 IOPS, 45.67 MiB/s [2024-11-19T17:26:06.983Z] 11990.00 IOPS, 46.84 MiB/s [2024-11-19T17:26:06.983Z] 12135.17 IOPS, 47.40 MiB/s [2024-11-19T17:26:06.983Z] 12247.86 IOPS, 47.84 MiB/s [2024-11-19T17:26:06.983Z] 12348.50 IOPS, 48.24 MiB/s [2024-11-19T17:26:06.983Z] 12411.89 IOPS, 48.48 MiB/s [2024-11-19T17:26:06.983Z] 12483.30 IOPS, 48.76 MiB/s [2024-11-19T17:26:06.983Z] 12525.91 IOPS, 48.93 MiB/s [2024-11-19T17:26:06.983Z] [2024-11-19 18:25:51.747884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.512 [2024-11-19 18:25:51.747920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:05.512 [2024-11-19 18:25:51.747939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.512 [2024-11-19 18:25:51.747945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:05.512 [2024-11-19 18:25:51.747956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.512 [2024-11-19 18:25:51.747962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:05.512 [2024-11-19 18:25:51.747973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.512 [2024-11-19 18:25:51.747978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:05.512 [... nvme_qpair print_command/print_completion pairs repeat for WRITE lba:16016 through lba:16648 and READ lba:15792 through lba:15848 (qid:1, sqhd:0019 through sqhd:0070), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...] 00:27:05.514 [2024-11-19 18:25:51.749748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.514 [2024-11-19 18:25:51.749753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:05.514 [2024-11-19 18:25:51.749764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.514 [2024-11-19 18:25:51.749769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.749780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.515 [2024-11-19 18:25:51.749785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.749795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.515 [2024-11-19 18:25:51.749802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.515 [2024-11-19 18:25:51.750173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.515 [2024-11-19 18:25:51.750190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.515 [2024-11-19 18:25:51.750206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.515 [2024-11-19 18:25:51.750222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.515 [2024-11-19 18:25:51.750240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.515 [2024-11-19 18:25:51.750256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.515 [2024-11-19 18:25:51.750271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.515 [2024-11-19 18:25:51.750286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.515 [2024-11-19 18:25:51.750302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.515 [2024-11-19 18:25:51.750317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.515 [2024-11-19 18:25:51.750333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.515 [2024-11-19 18:25:51.750351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.515 [2024-11-19 18:25:51.750366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.515 [2024-11-19 18:25:51.750382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.515 [2024-11-19 18:25:51.750397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.515 [2024-11-19 18:25:51.750412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.515 [2024-11-19 18:25:51.750427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.515 [2024-11-19 18:25:51.750443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.515 [2024-11-19 18:25:51.750458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.515 [2024-11-19 18:25:51.750473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.515 [2024-11-19 18:25:51.750489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.515 [2024-11-19 18:25:51.750504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.515 [2024-11-19 18:25:51.750519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.515 [2024-11-19 18:25:51.750535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.515 [2024-11-19 18:25:51.750551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.515 [2024-11-19 18:25:51.750567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.515 [2024-11-19 18:25:51.750582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.515 [2024-11-19 18:25:51.750597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.515 [2024-11-19 18:25:51.750613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.515 [2024-11-19 18:25:51.750628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.515 [2024-11-19 18:25:51.750643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:05.515 [2024-11-19 18:25:51.750653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.515 [2024-11-19 18:25:51.750659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.750669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.516 [2024-11-19 18:25:51.750674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.750684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.750689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:05.516 [2024-11-19 18:25:51.751905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.516 [2024-11-19 18:25:51.751910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:05.517 [2024-11-19 18:25:51.751920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.517 [2024-11-19 18:25:51.751925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:05.517 [2024-11-19 18:25:51.751935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.517 [2024-11-19 18:25:51.751940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:05.517 [2024-11-19 18:25:51.751951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.517 [2024-11-19 18:25:51.751956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:05.517 [2024-11-19 18:25:51.751966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.517 [2024-11-19 18:25:51.751971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:05.517 [2024-11-19 18:25:51.751981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.517 [2024-11-19 18:25:51.751987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:05.517 [2024-11-19 18:25:51.751997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.517 [2024-11-19 18:25:51.752002] nvme_qpair.c: 
00:27:05.517 [2024-11-19 18:25:51.752013 .. 18:25:51.754624] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs on sqid:1 — WRITE (lba:15992..16800, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (lba:15784..15984, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) — each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 (sqhd:0038..002b; ~180 identical pairs collapsed)
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.754630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.754640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.754645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.754655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.754661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.754671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.754676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.754686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.754691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.759985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.759999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.760005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.760015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.760021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.760031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.760037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:05.521 [2024-11-19 18:25:51.760047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.521 [2024-11-19 18:25:51.760053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-19 18:25:51.760311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-19 18:25:51.760327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-19 18:25:51.760343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-19 18:25:51.760358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-19 18:25:51.760374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-19 18:25:51.760389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-19 18:25:51.760405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.522 [2024-11-19 18:25:51.760420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:05.522 [2024-11-19 18:25:51.760478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.522 [2024-11-19 18:25:51.760483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.523 [2024-11-19 18:25:51.760813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.523 [2024-11-19 18:25:51.760829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.523 [2024-11-19 18:25:51.760844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.523 [2024-11-19 18:25:51.760860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.523 [2024-11-19 18:25:51.760877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.523 [2024-11-19 18:25:51.760893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.523 [2024-11-19 18:25:51.760908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.523 [2024-11-19 18:25:51.760924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.523 [2024-11-19 18:25:51.760940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.523 [2024-11-19 18:25:51.760955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:05.523 [2024-11-19 18:25:51.760965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.523 [2024-11-19 18:25:51.760971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.760981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-11-19 18:25:51.760987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.760997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-11-19 18:25:51.761002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-11-19 18:25:51.761018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-11-19 18:25:51.761033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-11-19 18:25:51.761049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-11-19 18:25:51.761064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-11-19 18:25:51.761081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-11-19 18:25:51.761194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.761987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.761997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.762002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.762013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.762018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.762028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.762034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.762045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.762050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.762060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.524 [2024-11-19 18:25:51.762065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:05.524 [2024-11-19 18:25:51.762075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:05.525 [2024-11-19 18:25:51.762912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.525 [2024-11-19 18:25:51.762920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:05.526 [2024-11-19 18:25:51.762931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.526 [2024-11-19 18:25:51.762936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:05.526 [2024-11-19 18:25:51.762946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.526 [2024-11-19 18:25:51.762951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:05.526 [2024-11-19 18:25:51.762965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.526 [2024-11-19 18:25:51.762970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:05.526 [2024-11-19 18:25:51.762981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.526 [2024-11-19 18:25:51.762986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:05.526 [2024-11-19 18:25:51.762996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.526 [2024-11-19 18:25:51.763001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:05.526 [2024-11-19 18:25:51.763012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.526 [2024-11-19 18:25:51.763017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:05.526 [2024-11-19 18:25:51.763027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.526 [2024-11-19 18:25:51.763033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:05.526 [2024-11-19 18:25:51.763043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.526 [2024-11-19 18:25:51.763048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:05.526 [2024-11-19 18:25:51.763058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.526 [2024-11-19 18:25:51.763063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
[... hundreds of similar *NOTICE* pairs: READ/WRITE commands on sqid:1 (len:8, lba range 15784-16800) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-11-19 18:25:51.763073 through 18:25:51.770279 ...]
00:27:05.529 [2024-11-19 18:25:51.770290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:05.529 [2024-11-19 18:25:51.770696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.529 [2024-11-19 18:25:51.770702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.770713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.530 [2024-11-19 18:25:51.770719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.770731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.530 [2024-11-19 18:25:51.770737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.770748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.530 [2024-11-19 18:25:51.770754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.770765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.530 [2024-11-19 18:25:51.770771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.770782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.530 [2024-11-19 18:25:51.770788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.770799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.530 [2024-11-19 18:25:51.770805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.770816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.530 [2024-11-19 18:25:51.770822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.770833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.530 [2024-11-19 18:25:51.770838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.770849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.770855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.770866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.770872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.770883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.770889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.770900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.770905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.770916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.770922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.770933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.770940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.770951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.770957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.770968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.770974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.770985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.770990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.771007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.771024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.771041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.771058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.771074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.771091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.771108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.771125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.771143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.771163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.771181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.771198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.771214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.771231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.771248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.530 [2024-11-19 18:25:51.771265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.530 [2024-11-19 18:25:51.771282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.530 [2024-11-19 18:25:51.771298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.530 [2024-11-19 18:25:51.771315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.530 [2024-11-19 18:25:51.771327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.531 [2024-11-19 18:25:51.771332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.531 [2024-11-19 18:25:51.771349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.531 [2024-11-19 18:25:51.771370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.531 [2024-11-19 18:25:51.771386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.531 [2024-11-19 18:25:51.771403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.531 [2024-11-19 18:25:51.771420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.531 [2024-11-19 18:25:51.771437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.531 [2024-11-19 18:25:51.771454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.531 [2024-11-19 18:25:51.771471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.531 [2024-11-19 18:25:51.771488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.531 [2024-11-19 18:25:51.771505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.531 [2024-11-19 18:25:51.771522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.531 [2024-11-19 18:25:51.771539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.531 [2024-11-19 18:25:51.771556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.771574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.771591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.771607] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.771624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.771641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.771658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.531 [2024-11-19 18:25:51.771675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.771691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.771703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.771708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.772442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.772455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.772468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.772474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.772486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.772492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.772503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.772512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.772524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.772529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.772541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.772546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.772558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.772563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.772575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.772580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.772591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.772597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.772608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.772614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.772625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.772631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.772642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.772648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.772659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.772665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.772676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.531 [2024-11-19 18:25:51.772682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:05.531 [2024-11-19 18:25:51.772693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.772710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.772728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.772745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.772762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.772779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.772796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.772813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.772829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.772846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.772863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.772880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.772897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.772913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.772931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.772948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.772965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.772982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.772988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.773283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.773291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.773304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.773309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.773320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.773326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.773337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.773343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.773355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.773360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.773371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.773377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.773388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.773394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.773405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.773411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.773424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.773430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.773441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.773447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.773458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.773464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.773475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.773481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.773492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.773498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:05.532 [2024-11-19 18:25:51.773509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.532 [2024-11-19 18:25:51.773515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.773982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.773989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.774006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.774023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.774040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.774057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.533 [2024-11-19 18:25:51.774074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.533 [2024-11-19 18:25:51.774090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.533 [2024-11-19 18:25:51.774107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.533 [2024-11-19 18:25:51.774124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.533 [2024-11-19 18:25:51.774141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.533 [2024-11-19 18:25:51.774162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.533 [2024-11-19 18:25:51.774179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.533 [2024-11-19 18:25:51.774197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.774215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.774232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.774249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.774266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.774283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.774300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.533 [2024-11-19 18:25:51.774317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:05.533 [2024-11-19 18:25:51.774328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.774334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.774345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.774350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.774361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.774367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.774379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.774385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.774396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.774403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.774416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.774421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.774433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.774438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.774449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.774455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.774466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.774472] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.774483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.774488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.774500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.774505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.774896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.774905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.774917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.774923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.774935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.774941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.774952] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.774958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.774969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.774975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.774986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.774992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.775011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.534 [2024-11-19 18:25:51.775028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.534 [2024-11-19 18:25:51.775045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.534 [2024-11-19 18:25:51.775062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.534 [2024-11-19 18:25:51.775079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.534 [2024-11-19 18:25:51.775096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.534 [2024-11-19 18:25:51.775113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.534 [2024-11-19 18:25:51.775130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.534 [2024-11-19 18:25:51.775147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.534 [2024-11-19 18:25:51.775169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.534 [2024-11-19 18:25:51.775185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.534 [2024-11-19 18:25:51.775202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.534 [2024-11-19 18:25:51.775221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.534 [2024-11-19 18:25:51.775238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.534 [2024-11-19 18:25:51.775254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.534 [2024-11-19 18:25:51.775273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.534 [2024-11-19 18:25:51.775290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.534 [2024-11-19 18:25:51.775307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.775324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.775341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.775358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.534 [2024-11-19 18:25:51.775374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:05.534 [2024-11-19 18:25:51.775386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.775391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.775402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.775408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.775420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.535 [2024-11-19 18:25:51.775425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.775438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.775444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.775455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.775460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.775472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.775478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.775489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.775495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.775506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.775512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.775523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.775529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.775540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.775545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.775557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.775562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.775574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.775580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.775592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.775598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.775610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.775616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.775627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.775633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.775646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.775652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.775663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.775669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.775680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.775686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:05.535 [2024-11-19 18:25:51.780912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.535 [2024-11-19 18:25:51.780918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.780930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.780941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.780953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.780959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.780971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.780977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.780989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.780995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.536 [2024-11-19 18:25:51.781527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:05.536 [2024-11-19 18:25:51.781539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.781545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.781563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.537 [2024-11-19 18:25:51.781581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.537 [2024-11-19 18:25:51.781599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.537 [2024-11-19 18:25:51.781620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.537 [2024-11-19 18:25:51.781639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.537 [2024-11-19 18:25:51.781658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.537 [2024-11-19 18:25:51.781677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.537 [2024-11-19 18:25:51.781696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.537 [2024-11-19 18:25:51.781714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.781732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.781750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.781769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.781787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.781805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.781823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.781842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.781860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.781878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.781896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.781913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.781931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.781949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.781967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.781985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.781997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.782003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.782016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.782021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.782680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.782693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.782707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.782713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.782727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.782734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.782746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.782752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.782764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.782770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.782782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.782788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.782800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.782806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.782818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.537 [2024-11-19 18:25:51.782825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.782837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.537 [2024-11-19 18:25:51.782843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.782855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.537 [2024-11-19 18:25:51.782861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.782873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.537 [2024-11-19 18:25:51.782879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.537 [2024-11-19 18:25:51.782891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.537 [2024-11-19 18:25:51.782897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.782909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.538 [2024-11-19 18:25:51.782915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.782927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.538 [2024-11-19 18:25:51.782933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.782947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.538 [2024-11-19 18:25:51.782953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.782965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.538 [2024-11-19 18:25:51.782971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.782983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.538 [2024-11-19 18:25:51.782989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.538 [2024-11-19 18:25:51.783007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.538 [2024-11-19 18:25:51.783025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.538 [2024-11-19 18:25:51.783043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.538 [2024-11-19 18:25:51.783061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.538 [2024-11-19 18:25:51.783080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.538 [2024-11-19 18:25:51.783098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.538 [2024-11-19 18:25:51.783116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.538 [2024-11-19 18:25:51.783134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783242] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.538 [2024-11-19 18:25:51.783266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.783588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.538 [2024-11-19 18:25:51.783594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:05.538 [2024-11-19 18:25:51.784007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.784986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.784998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.539 [2024-11-19 18:25:51.785006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:05.539 [2024-11-19 18:25:51.785018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.540 [2024-11-19 18:25:51.785300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.540 [2024-11-19 18:25:51.785318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.540 [2024-11-19 18:25:51.785336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.540 [2024-11-19 18:25:51.785354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.540 [2024-11-19 18:25:51.785372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.540 [2024-11-19 18:25:51.785390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.540 [2024-11-19 18:25:51.785408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.540 [2024-11-19 18:25:51.785426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.540 [2024-11-19 18:25:51.785718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:05.540 [2024-11-19 18:25:51.785730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.541 [2024-11-19 18:25:51.785736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.785749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.541 [2024-11-19 18:25:51.785755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786139] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.541 [2024-11-19 18:25:51.786148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.541 [2024-11-19 18:25:51.786173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.541 [2024-11-19 18:25:51.786191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.541 [2024-11-19 18:25:51.786210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.541 [2024-11-19 18:25:51.786228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.541 [2024-11-19 18:25:51.786246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.541 [2024-11-19 18:25:51.786264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.541 [2024-11-19 18:25:51.786592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.541 [2024-11-19 18:25:51.786610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.541 [2024-11-19 18:25:51.786628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.541 [2024-11-19 18:25:51.786646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.541 [2024-11-19 18:25:51.786663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.541 [2024-11-19 18:25:51.786681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.541 [2024-11-19 18:25:51.786699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:05.541 [2024-11-19 18:25:51.786711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.786717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.786730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.786736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.786748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.786754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.786766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.786772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.786784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.786790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.786802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.786808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.786820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.786826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.786838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.786844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.786856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.786862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.786874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.786880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.786892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.786898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.786910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.786916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.786927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.786934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.786946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.786953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.786965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.786971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.786982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.786988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:05.542 [2024-11-19 18:25:51.787937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.542 [2024-11-19 18:25:51.787943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.787955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.787961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.787972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.787979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.787990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.787996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.788375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.788381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.794446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.794466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.794478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.794484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.794495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.794500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.794511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.794516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.794526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.794532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.794542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.794548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.794558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.794563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.794573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.543 [2024-11-19 18:25:51.794579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.794589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.543 [2024-11-19 18:25:51.794595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.794605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.543 [2024-11-19 18:25:51.794610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.794621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.543 [2024-11-19 18:25:51.794626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.794636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.543 [2024-11-19 18:25:51.794645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.794656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.543 [2024-11-19 18:25:51.794661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.794672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.543 [2024-11-19 18:25:51.794677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.543 [2024-11-19 18:25:51.794687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.544 [2024-11-19 18:25:51.794692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.544 [2024-11-19 18:25:51.794708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.794724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.794739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.794755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.794771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.794786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.794802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.794817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.794834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.794849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.794866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.794881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.794896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.794912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.794928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.794943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.794959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.794970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.794975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.795506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.795524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.795540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.795559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.795575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.795590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.795605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.544 [2024-11-19 18:25:51.795621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.544 [2024-11-19 18:25:51.795637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.544 [2024-11-19 18:25:51.795652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.544 [2024-11-19 18:25:51.795668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.544 [2024-11-19 18:25:51.795684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.544 [2024-11-19 18:25:51.795699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.544 [2024-11-19 18:25:51.795715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.544 [2024-11-19 18:25:51.795731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.544 [2024-11-19 18:25:51.795748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.544 [2024-11-19 18:25:51.795764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.544 [2024-11-19 18:25:51.795779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.544 [2024-11-19 18:25:51.795795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:05.544 [2024-11-19 18:25:51.795805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.544 [2024-11-19 18:25:51.795810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.795821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.545 [2024-11-19 18:25:51.795826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.795836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.545 [2024-11-19 18:25:51.795841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.795852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.545 [2024-11-19 18:25:51.795857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.795867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.545 [2024-11-19 18:25:51.795872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.795882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.545 [2024-11-19 18:25:51.795888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.795898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.795903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.795914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.795919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.795929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.795936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.795946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.795952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.795962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.795967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.795978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.795983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.795993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.545 [2024-11-19 18:25:51.795999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.545 [2024-11-19 18:25:51.796353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:05.545 [2024-11-19 18:25:51.796363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.796368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.796378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.796384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.796394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.796399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.796409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.796416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.796426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.796431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.796441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.796447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.796457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.796463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.796474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.796479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.796489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.796494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.796505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.796510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.796521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.796527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.796537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.796542] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797105] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.546 [2024-11-19 18:25:51.797436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:05.546 [2024-11-19 18:25:51.797446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.547 [2024-11-19 18:25:51.797624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.547 [2024-11-19 18:25:51.797640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.547 [2024-11-19 18:25:51.797656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.547 [2024-11-19 18:25:51.797671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.547 [2024-11-19 18:25:51.797687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.547 [2024-11-19 18:25:51.797703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.547 [2024-11-19 18:25:51.797718] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.547 [2024-11-19 18:25:51.797733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.797984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.797990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.798445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.798454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:05.547 [2024-11-19 18:25:51.798466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.547 [2024-11-19 18:25:51.798472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.798488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.798503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.798519] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.798536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.798551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.798567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.798583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798609] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.798868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.798883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.798899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.798914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.798930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.798945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.798956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.548 [2024-11-19 18:25:51.798961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.799295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.799304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.799315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.799321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.799333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.799339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.799349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.799355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.799365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.799370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:05.548 [2024-11-19 18:25:51.799381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.548 [2024-11-19 18:25:51.799386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.799714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.799724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.804331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.804372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.804381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.804395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.804402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.804415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.804422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.804436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.804443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.804457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.804464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.804478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.804486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.804863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.804876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.804892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.804899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.804913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.804920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.804934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.804941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.804954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.804961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.804978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.804985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.804998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.805006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.805019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.805026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.805039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.805046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.805060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.549 [2024-11-19 18:25:51.805066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:05.549 [2024-11-19 18:25:51.805080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805403] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.550 [2024-11-19 18:25:51.805626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.550 [2024-11-19 18:25:51.805646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.550 [2024-11-19 18:25:51.805667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.550 [2024-11-19 18:25:51.805687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.550 [2024-11-19 18:25:51.805709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.550 [2024-11-19 18:25:51.805733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.550 [2024-11-19 18:25:51.805753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.550 [2024-11-19 18:25:51.805767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.550 [2024-11-19 18:25:51.805774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.805787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.551 [2024-11-19 18:25:51.805794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.805808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.805816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.805829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.805836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.805849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.805857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.805870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.805877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.805891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.805898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.805911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.805918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.805932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.805939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.805952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.805959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.805972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.805980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.805995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.806002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.806015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.806023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.806036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.806043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.806057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.806063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.806077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.806084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.806097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.806104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.806117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.806124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.806138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.806144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.806161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.806169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.806182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551 [2024-11-19 18:25:51.806189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:05.551 [2024-11-19 18:25:51.806203] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.551
[2024-11-19 18:25:51.806209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:05.551
[... dozens of further WRITE (lba 16720-16256) and READ (lba 15784-15984) commands on qid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), elided ...]
12452.42 IOPS, 48.64 MiB/s [2024-11-19T17:26:07.024Z]
11494.54 IOPS, 44.90 MiB/s [2024-11-19T17:26:07.024Z]
10673.50 IOPS, 41.69 MiB/s [2024-11-19T17:26:07.024Z]
10000.40 IOPS, 39.06 MiB/s [2024-11-19T17:26:07.024Z]
10188.12 IOPS, 39.80 MiB/s [2024-11-19T17:26:07.024Z]
10350.12 IOPS, 40.43 MiB/s [2024-11-19T17:26:07.024Z]
10735.67 IOPS, 41.94 MiB/s [2024-11-19T17:26:07.024Z]
11055.68 IOPS, 43.19 MiB/s [2024-11-19T17:26:07.024Z]
11245.30 IOPS, 43.93 MiB/s [2024-11-19T17:26:07.024Z]
11329.86 IOPS, 44.26 MiB/s [2024-11-19T17:26:07.024Z]
11396.95 IOPS, 44.52 MiB/s [2024-11-19T17:26:07.024Z]
11621.52 IOPS, 45.40 MiB/s [2024-11-19T17:26:07.024Z]
11838.25 IOPS, 46.24 MiB/s [2024-11-19T17:26:07.024Z]
[2024-11-19 18:26:04.418685] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.553
[2024-11-19 18:26:04.418722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:05.553
[... dozens of further READ (lba 117992-118136) and WRITE (lba 118160-118816) commands on qid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), elided ...]
[2024-11-19 18:26:04.421074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:27:05.554 [2024-11-19 18:26:04.421089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:118832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.554 [2024-11-19 18:26:04.421094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:05.554 [2024-11-19 18:26:04.421105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:118848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.554 [2024-11-19 18:26:04.421110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:05.554 [2024-11-19 18:26:04.421120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.554 [2024-11-19 18:26:04.421125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:05.554 [2024-11-19 18:26:04.421135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:118872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.554 [2024-11-19 18:26:04.421141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:05.554 [2024-11-19 18:26:04.421151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.554 [2024-11-19 18:26:04.421156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:05.554 [2024-11-19 18:26:04.421173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:05.554 [2024-11-19 18:26:04.421178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:05.554 [2024-11-19 18:26:04.421188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.554 [2024-11-19 18:26:04.421193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:05.554 [2024-11-19 18:26:04.421204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:118936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.554 [2024-11-19 18:26:04.421209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:05.554 [2024-11-19 18:26:04.421219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:118952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.554 [2024-11-19 18:26:04.421224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:05.554 [2024-11-19 18:26:04.421236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.554 [2024-11-19 18:26:04.421241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:05.554 [2024-11-19 18:26:04.421251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:118080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.554 [2024-11-19 18:26:04.421257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 
00:27:05.554 [2024-11-19 18:26:04.421267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:118112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.554 [2024-11-19 18:26:04.421272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:05.554 [2024-11-19 18:26:04.421282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.554 [2024-11-19 18:26:04.421287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:05.554 [2024-11-19 18:26:04.421298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:05.554 [2024-11-19 18:26:04.421303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:05.554 11973.80 IOPS, 46.77 MiB/s [2024-11-19T17:26:07.025Z] 12013.62 IOPS, 46.93 MiB/s [2024-11-19T17:26:07.025Z] Received shutdown signal, test time was about 26.749745 seconds 00:27:05.554 00:27:05.554 Latency(us) 00:27:05.554 [2024-11-19T17:26:07.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.554 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:05.554 Verification LBA range: start 0x0 length 0x4000 00:27:05.555 Nvme0n1 : 26.75 12043.31 47.04 0.00 0.00 10610.67 856.75 3075822.93 00:27:05.555 [2024-11-19T17:26:07.026Z] =================================================================================================================== 00:27:05.555 [2024-11-19T17:26:07.026Z] Total : 12043.31 47.04 0.00 0.00 10610.67 856.75 3075822.93 00:27:05.555 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:05.555 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:05.555 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:05.555 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:05.555 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:05.555 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:27:05.555 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:05.555 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:27:05.555 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:05.555 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:05.555 rmmod nvme_tcp 00:27:05.555 rmmod nvme_fabrics 00:27:05.555 rmmod nvme_keyring 00:27:05.817 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:05.817 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:27:05.817 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:27:05.817 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2117107 ']' 00:27:05.817 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2117107 00:27:05.817 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 
2117107 ']' 00:27:05.817 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2117107 00:27:05.817 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:05.817 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:05.817 18:26:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2117107 00:27:05.817 18:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:05.817 18:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:05.817 18:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2117107' 00:27:05.817 killing process with pid 2117107 00:27:05.817 18:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2117107 00:27:05.817 18:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2117107 00:27:05.817 18:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:05.817 18:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:05.817 18:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:05.817 18:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:27:05.817 18:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:27:05.817 18:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:05.817 18:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:27:05.817 18:26:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:05.817 18:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:05.817 18:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.817 18:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.817 18:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:08.364 00:27:08.364 real 0m41.250s 00:27:08.364 user 1m46.720s 00:27:08.364 sys 0m11.444s 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:08.364 ************************************ 00:27:08.364 END TEST nvmf_host_multipath_status 00:27:08.364 ************************************ 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.364 ************************************ 00:27:08.364 START TEST nvmf_discovery_remove_ifc 00:27:08.364 ************************************ 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:08.364 * Looking for test storage... 00:27:08.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:08.364 18:26:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:08.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.364 --rc genhtml_branch_coverage=1 00:27:08.364 --rc genhtml_function_coverage=1 00:27:08.364 --rc genhtml_legend=1 00:27:08.364 --rc geninfo_all_blocks=1 00:27:08.364 --rc geninfo_unexecuted_blocks=1 00:27:08.364 00:27:08.364 ' 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:08.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.364 --rc genhtml_branch_coverage=1 00:27:08.364 --rc genhtml_function_coverage=1 00:27:08.364 --rc genhtml_legend=1 00:27:08.364 --rc geninfo_all_blocks=1 00:27:08.364 --rc geninfo_unexecuted_blocks=1 00:27:08.364 00:27:08.364 ' 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:08.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.364 --rc genhtml_branch_coverage=1 00:27:08.364 --rc genhtml_function_coverage=1 00:27:08.364 --rc genhtml_legend=1 00:27:08.364 --rc geninfo_all_blocks=1 00:27:08.364 --rc geninfo_unexecuted_blocks=1 00:27:08.364 00:27:08.364 ' 00:27:08.364 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:08.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.364 --rc genhtml_branch_coverage=1 00:27:08.364 --rc genhtml_function_coverage=1 00:27:08.364 --rc genhtml_legend=1 00:27:08.364 --rc geninfo_all_blocks=1 00:27:08.364 --rc geninfo_unexecuted_blocks=1 00:27:08.364 00:27:08.364 ' 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:08.365 18:26:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:08.365 18:26:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:08.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:08.365 
18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:08.365 18:26:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:16.510 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:16.510 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:16.510 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:16.510 18:26:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:16.510 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:16.510 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:16.511 18:26:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.511 18:26:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:16.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:27:16.511 00:27:16.511 --- 10.0.0.2 ping statistics --- 00:27:16.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.511 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:16.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:27:16.511 00:27:16.511 --- 10.0.0.1 ping statistics --- 00:27:16.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.511 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2127358 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 2127358 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2127358 ']' 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.511 18:26:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:16.511 [2024-11-19 18:26:17.049402] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:27:16.511 [2024-11-19 18:26:17.049468] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.511 [2024-11-19 18:26:17.148860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.511 [2024-11-19 18:26:17.198427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.511 [2024-11-19 18:26:17.198475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:16.511 [2024-11-19 18:26:17.198487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.511 [2024-11-19 18:26:17.198496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.511 [2024-11-19 18:26:17.198505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:16.511 [2024-11-19 18:26:17.199377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.511 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:16.511 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:16.511 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:16.511 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:16.511 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:16.511 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.511 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:16.511 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.511 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:16.511 [2024-11-19 18:26:17.920920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.511 [2024-11-19 18:26:17.929157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:16.511 null0 00:27:16.511 [2024-11-19 18:26:17.961137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:27:16.773 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.773 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2127582 00:27:16.773 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2127582 /tmp/host.sock 00:27:16.773 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:16.773 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2127582 ']' 00:27:16.773 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:16.773 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.773 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:16.773 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:16.773 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.773 18:26:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:16.773 [2024-11-19 18:26:18.038809] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:27:16.773 [2024-11-19 18:26:18.038872] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127582 ] 00:27:16.773 [2024-11-19 18:26:18.131354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.773 [2024-11-19 18:26:18.184649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.716 18:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:17.716 18:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:17.716 18:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:17.716 18:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:17.716 18:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.716 18:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 18:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.716 18:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:17.716 18:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.716 18:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:17.716 18:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.716 18:26:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:17.716 18:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.716 18:26:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:18.658 [2024-11-19 18:26:20.012286] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:18.658 [2024-11-19 18:26:20.012319] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:18.658 [2024-11-19 18:26:20.012342] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:18.658 [2024-11-19 18:26:20.100612] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:18.952 [2024-11-19 18:26:20.281027] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:18.952 [2024-11-19 18:26:20.282304] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x19563f0:1 started. 
00:27:18.952 [2024-11-19 18:26:20.284109] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:18.952 [2024-11-19 18:26:20.284185] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:18.952 [2024-11-19 18:26:20.284209] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:18.952 [2024-11-19 18:26:20.284228] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:18.952 [2024-11-19 18:26:20.284254] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:18.952 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.952 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:18.952 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:18.952 [2024-11-19 18:26:20.289498] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x19563f0 was disconnected and freed. delete nvme_qpair. 
00:27:18.952 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:18.952 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:18.952 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.952 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:18.952 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:18.952 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:18.952 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.952 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:18.952 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:18.952 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:19.255 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:19.255 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:19.255 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:19.255 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:19.255 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.255 18:26:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:19.255 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:19.255 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:19.255 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.255 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:19.255 18:26:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:20.260 18:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:20.260 18:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:20.260 18:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:20.260 18:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.260 18:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:20.260 18:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:20.260 18:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:20.260 18:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.260 18:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:20.260 18:26:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:21.210 18:26:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:27:21.210 18:26:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:21.210 18:26:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:21.210 18:26:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.210 18:26:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:21.210 18:26:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:21.210 18:26:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:21.210 18:26:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.210 18:26:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:21.210 18:26:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:22.597 18:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:22.597 18:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:22.597 18:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:22.597 18:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.597 18:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.597 18:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:22.597 18:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:22.597 18:26:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.597 18:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:22.597 18:26:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:23.543 18:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:23.543 18:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.543 18:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:23.543 18:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.543 18:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:23.543 18:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.543 18:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:23.543 18:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.543 18:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:23.543 18:26:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:24.485 [2024-11-19 18:26:25.724241] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:24.485 [2024-11-19 18:26:25.724276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.485 [2024-11-19 18:26:25.724285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.485 [2024-11-19 18:26:25.724292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.485 [2024-11-19 18:26:25.724297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.485 [2024-11-19 18:26:25.724303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.485 [2024-11-19 18:26:25.724308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.485 [2024-11-19 18:26:25.724314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.485 [2024-11-19 18:26:25.724319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.485 [2024-11-19 18:26:25.724325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.485 [2024-11-19 18:26:25.724330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.485 [2024-11-19 18:26:25.724335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1932c00 is same with the state(6) to be set 00:27:24.485 [2024-11-19 18:26:25.734264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1932c00 (9): Bad file descriptor 00:27:24.485 18:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:24.485 18:26:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.485 18:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:24.485 18:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.485 18:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:24.485 18:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:24.485 18:26:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:24.485 [2024-11-19 18:26:25.744298] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:24.485 [2024-11-19 18:26:25.744307] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:24.485 [2024-11-19 18:26:25.744311] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:24.485 [2024-11-19 18:26:25.744315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:24.485 [2024-11-19 18:26:25.744330] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:25.428 [2024-11-19 18:26:26.779246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:25.428 [2024-11-19 18:26:26.779344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1932c00 with addr=10.0.0.2, port=4420 00:27:25.428 [2024-11-19 18:26:26.779377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1932c00 is same with the state(6) to be set 00:27:25.428 [2024-11-19 18:26:26.779434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1932c00 (9): Bad file descriptor 00:27:25.428 [2024-11-19 18:26:26.780565] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:25.428 [2024-11-19 18:26:26.780637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:25.428 [2024-11-19 18:26:26.780659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:25.428 [2024-11-19 18:26:26.780682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:25.428 [2024-11-19 18:26:26.780704] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:25.428 [2024-11-19 18:26:26.780720] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:25.428 [2024-11-19 18:26:26.780734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:25.428 [2024-11-19 18:26:26.780756] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:27:25.428 [2024-11-19 18:26:26.780771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:25.428 18:26:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.428 18:26:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:25.428 18:26:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:26.369 [2024-11-19 18:26:27.783193] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:26.369 [2024-11-19 18:26:27.783209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:26.369 [2024-11-19 18:26:27.783218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:26.369 [2024-11-19 18:26:27.783223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:26.369 [2024-11-19 18:26:27.783229] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:26.369 [2024-11-19 18:26:27.783234] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:26.369 [2024-11-19 18:26:27.783238] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:26.369 [2024-11-19 18:26:27.783241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:26.369 [2024-11-19 18:26:27.783258] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:26.369 [2024-11-19 18:26:27.783275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.369 [2024-11-19 18:26:27.783282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.369 [2024-11-19 18:26:27.783289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.369 [2024-11-19 18:26:27.783295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.369 [2024-11-19 18:26:27.783301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.369 [2024-11-19 18:26:27.783306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.369 [2024-11-19 18:26:27.783312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.369 [2024-11-19 18:26:27.783320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.369 [2024-11-19 18:26:27.783326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.369 [2024-11-19 18:26:27.783331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.369 [2024-11-19 18:26:27.783336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:27:26.369 [2024-11-19 18:26:27.783741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1922340 (9): Bad file descriptor 00:27:26.369 [2024-11-19 18:26:27.784752] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:26.369 [2024-11-19 18:26:27.784760] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:26.369 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:26.369 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.369 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:26.369 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.369 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:26.369 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.369 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:26.369 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.629 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:26.630 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:26.630 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.630 18:26:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:26.630 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:26.630 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.630 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:26.630 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.630 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:26.630 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.630 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:26.630 18:26:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.630 18:26:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:26.630 18:26:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:27.600 18:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:27.600 18:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.600 18:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:27.600 18:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.600 18:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:27.600 18:26:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.600 18:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:27.600 18:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.600 18:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:27.600 18:26:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:28.551 [2024-11-19 18:26:29.835340] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:28.551 [2024-11-19 18:26:29.835353] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:28.551 [2024-11-19 18:26:29.835363] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:28.551 [2024-11-19 18:26:29.923616] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:28.812 [2024-11-19 18:26:30.023795] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:28.812 [2024-11-19 18:26:30.024598] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1927130:1 started. 
00:27:28.812 [2024-11-19 18:26:30.025515] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:28.812 [2024-11-19 18:26:30.025543] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:28.812 [2024-11-19 18:26:30.025557] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:28.812 [2024-11-19 18:26:30.025569] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:28.812 [2024-11-19 18:26:30.025575] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:28.812 [2024-11-19 18:26:30.032917] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1927130 was disconnected and freed. delete nvme_qpair. 00:27:28.812 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:28.812 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.812 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:28.812 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.812 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:28.812 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.812 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:28.812 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.812 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:28.812 18:26:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:28.812 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2127582 00:27:28.812 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2127582 ']' 00:27:28.812 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2127582 00:27:28.812 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:28.813 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.813 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2127582 00:27:28.813 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:28.813 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:28.813 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2127582' 00:27:28.813 killing process with pid 2127582 00:27:28.813 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2127582 00:27:28.813 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2127582 00:27:28.813 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:28.813 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:28.813 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:29.074 
18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:29.074 rmmod nvme_tcp 00:27:29.074 rmmod nvme_fabrics 00:27:29.074 rmmod nvme_keyring 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2127358 ']' 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2127358 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2127358 ']' 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2127358 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2127358 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2127358' 00:27:29.074 
killing process with pid 2127358 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2127358 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2127358 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:29.074 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:29.075 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:29.075 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:29.075 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:29.075 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:29.075 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:29.075 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:29.075 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.075 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:29.075 18:26:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:31.620 00:27:31.620 real 0m23.285s 00:27:31.620 user 0m27.259s 00:27:31.620 sys 0m7.104s 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:31.620 ************************************ 00:27:31.620 END TEST nvmf_discovery_remove_ifc 00:27:31.620 ************************************ 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.620 ************************************ 00:27:31.620 START TEST nvmf_identify_kernel_target 00:27:31.620 ************************************ 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:31.620 * Looking for test storage... 
00:27:31.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:31.620 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:31.621 18:26:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:31.621 18:26:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:31.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.621 --rc genhtml_branch_coverage=1 00:27:31.621 --rc genhtml_function_coverage=1 00:27:31.621 --rc genhtml_legend=1 00:27:31.621 --rc geninfo_all_blocks=1 00:27:31.621 --rc geninfo_unexecuted_blocks=1 00:27:31.621 00:27:31.621 ' 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:31.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.621 --rc genhtml_branch_coverage=1 00:27:31.621 --rc genhtml_function_coverage=1 00:27:31.621 --rc genhtml_legend=1 00:27:31.621 --rc geninfo_all_blocks=1 00:27:31.621 --rc geninfo_unexecuted_blocks=1 00:27:31.621 00:27:31.621 ' 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:31.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.621 --rc genhtml_branch_coverage=1 00:27:31.621 --rc genhtml_function_coverage=1 00:27:31.621 --rc genhtml_legend=1 00:27:31.621 --rc geninfo_all_blocks=1 00:27:31.621 --rc geninfo_unexecuted_blocks=1 00:27:31.621 00:27:31.621 ' 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:31.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:31.621 --rc genhtml_branch_coverage=1 00:27:31.621 --rc genhtml_function_coverage=1 00:27:31.621 --rc genhtml_legend=1 00:27:31.621 --rc geninfo_all_blocks=1 00:27:31.621 --rc geninfo_unexecuted_blocks=1 00:27:31.621 00:27:31.621 ' 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:31.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:31.621 18:26:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:39.762 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.762 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:39.762 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:39.762 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:39.762 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:39.762 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:39.762 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:39.762 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:39.762 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:39.762 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:39.762 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:39.762 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:39.762 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:39.762 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:39.762 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.763 18:26:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:39.763 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.763 18:26:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:39.763 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.763 18:26:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:39.763 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:39.763 Found net devices under 0000:4b:00.1: cvl_0_1 
00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:39.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:39.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:27:39.763 00:27:39.763 --- 10.0.0.2 ping statistics --- 00:27:39.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.763 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:27:39.763 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:39.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:27:39.764 00:27:39.764 --- 10.0.0.1 ping statistics --- 00:27:39.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.764 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:39.764 
18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:39.764 18:26:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:43.067 Waiting for block devices as requested 00:27:43.067 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:43.067 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:43.067 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:43.067 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:43.067 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:43.067 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:43.067 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:43.067 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:43.328 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:43.329 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:43.589 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:43.589 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:43.589 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:43.850 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:43.850 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:27:43.850 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:44.111 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:44.374 No valid GPT data, bailing 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:44.374 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:44.637 00:27:44.637 Discovery Log Number of Records 2, Generation counter 2 00:27:44.637 =====Discovery Log Entry 0====== 00:27:44.637 trtype: tcp 00:27:44.637 adrfam: ipv4 00:27:44.637 subtype: current discovery subsystem 
00:27:44.637 treq: not specified, sq flow control disable supported 00:27:44.637 portid: 1 00:27:44.637 trsvcid: 4420 00:27:44.637 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:44.637 traddr: 10.0.0.1 00:27:44.637 eflags: none 00:27:44.637 sectype: none 00:27:44.637 =====Discovery Log Entry 1====== 00:27:44.637 trtype: tcp 00:27:44.637 adrfam: ipv4 00:27:44.637 subtype: nvme subsystem 00:27:44.637 treq: not specified, sq flow control disable supported 00:27:44.637 portid: 1 00:27:44.637 trsvcid: 4420 00:27:44.637 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:44.637 traddr: 10.0.0.1 00:27:44.637 eflags: none 00:27:44.637 sectype: none 00:27:44.637 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:44.637 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:44.637 ===================================================== 00:27:44.637 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:44.637 ===================================================== 00:27:44.637 Controller Capabilities/Features 00:27:44.637 ================================ 00:27:44.637 Vendor ID: 0000 00:27:44.637 Subsystem Vendor ID: 0000 00:27:44.637 Serial Number: 2c23a7a341d192d4e0c0 00:27:44.637 Model Number: Linux 00:27:44.637 Firmware Version: 6.8.9-20 00:27:44.637 Recommended Arb Burst: 0 00:27:44.637 IEEE OUI Identifier: 00 00 00 00:27:44.637 Multi-path I/O 00:27:44.637 May have multiple subsystem ports: No 00:27:44.637 May have multiple controllers: No 00:27:44.637 Associated with SR-IOV VF: No 00:27:44.637 Max Data Transfer Size: Unlimited 00:27:44.637 Max Number of Namespaces: 0 00:27:44.637 Max Number of I/O Queues: 1024 00:27:44.637 NVMe Specification Version (VS): 1.3 00:27:44.637 NVMe Specification Version (Identify): 1.3 00:27:44.637 Maximum Queue Entries: 1024 
00:27:44.637 Contiguous Queues Required: No 00:27:44.637 Arbitration Mechanisms Supported 00:27:44.637 Weighted Round Robin: Not Supported 00:27:44.637 Vendor Specific: Not Supported 00:27:44.637 Reset Timeout: 7500 ms 00:27:44.637 Doorbell Stride: 4 bytes 00:27:44.637 NVM Subsystem Reset: Not Supported 00:27:44.637 Command Sets Supported 00:27:44.637 NVM Command Set: Supported 00:27:44.637 Boot Partition: Not Supported 00:27:44.637 Memory Page Size Minimum: 4096 bytes 00:27:44.637 Memory Page Size Maximum: 4096 bytes 00:27:44.637 Persistent Memory Region: Not Supported 00:27:44.637 Optional Asynchronous Events Supported 00:27:44.637 Namespace Attribute Notices: Not Supported 00:27:44.637 Firmware Activation Notices: Not Supported 00:27:44.637 ANA Change Notices: Not Supported 00:27:44.637 PLE Aggregate Log Change Notices: Not Supported 00:27:44.637 LBA Status Info Alert Notices: Not Supported 00:27:44.637 EGE Aggregate Log Change Notices: Not Supported 00:27:44.637 Normal NVM Subsystem Shutdown event: Not Supported 00:27:44.637 Zone Descriptor Change Notices: Not Supported 00:27:44.637 Discovery Log Change Notices: Supported 00:27:44.637 Controller Attributes 00:27:44.637 128-bit Host Identifier: Not Supported 00:27:44.637 Non-Operational Permissive Mode: Not Supported 00:27:44.637 NVM Sets: Not Supported 00:27:44.637 Read Recovery Levels: Not Supported 00:27:44.637 Endurance Groups: Not Supported 00:27:44.637 Predictable Latency Mode: Not Supported 00:27:44.637 Traffic Based Keep ALive: Not Supported 00:27:44.637 Namespace Granularity: Not Supported 00:27:44.637 SQ Associations: Not Supported 00:27:44.637 UUID List: Not Supported 00:27:44.637 Multi-Domain Subsystem: Not Supported 00:27:44.637 Fixed Capacity Management: Not Supported 00:27:44.637 Variable Capacity Management: Not Supported 00:27:44.637 Delete Endurance Group: Not Supported 00:27:44.637 Delete NVM Set: Not Supported 00:27:44.637 Extended LBA Formats Supported: Not Supported 00:27:44.637 Flexible 
Data Placement Supported: Not Supported 00:27:44.637 00:27:44.637 Controller Memory Buffer Support 00:27:44.637 ================================ 00:27:44.637 Supported: No 00:27:44.637 00:27:44.637 Persistent Memory Region Support 00:27:44.637 ================================ 00:27:44.637 Supported: No 00:27:44.637 00:27:44.637 Admin Command Set Attributes 00:27:44.637 ============================ 00:27:44.637 Security Send/Receive: Not Supported 00:27:44.637 Format NVM: Not Supported 00:27:44.637 Firmware Activate/Download: Not Supported 00:27:44.637 Namespace Management: Not Supported 00:27:44.637 Device Self-Test: Not Supported 00:27:44.637 Directives: Not Supported 00:27:44.637 NVMe-MI: Not Supported 00:27:44.637 Virtualization Management: Not Supported 00:27:44.637 Doorbell Buffer Config: Not Supported 00:27:44.637 Get LBA Status Capability: Not Supported 00:27:44.637 Command & Feature Lockdown Capability: Not Supported 00:27:44.637 Abort Command Limit: 1 00:27:44.637 Async Event Request Limit: 1 00:27:44.637 Number of Firmware Slots: N/A 00:27:44.637 Firmware Slot 1 Read-Only: N/A 00:27:44.637 Firmware Activation Without Reset: N/A 00:27:44.637 Multiple Update Detection Support: N/A 00:27:44.637 Firmware Update Granularity: No Information Provided 00:27:44.637 Per-Namespace SMART Log: No 00:27:44.637 Asymmetric Namespace Access Log Page: Not Supported 00:27:44.637 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:44.637 Command Effects Log Page: Not Supported 00:27:44.637 Get Log Page Extended Data: Supported 00:27:44.637 Telemetry Log Pages: Not Supported 00:27:44.637 Persistent Event Log Pages: Not Supported 00:27:44.637 Supported Log Pages Log Page: May Support 00:27:44.637 Commands Supported & Effects Log Page: Not Supported 00:27:44.637 Feature Identifiers & Effects Log Page:May Support 00:27:44.637 NVMe-MI Commands & Effects Log Page: May Support 00:27:44.637 Data Area 4 for Telemetry Log: Not Supported 00:27:44.637 Error Log Page Entries 
Supported: 1 00:27:44.637 Keep Alive: Not Supported 00:27:44.637 00:27:44.637 NVM Command Set Attributes 00:27:44.637 ========================== 00:27:44.637 Submission Queue Entry Size 00:27:44.637 Max: 1 00:27:44.637 Min: 1 00:27:44.637 Completion Queue Entry Size 00:27:44.637 Max: 1 00:27:44.637 Min: 1 00:27:44.637 Number of Namespaces: 0 00:27:44.637 Compare Command: Not Supported 00:27:44.637 Write Uncorrectable Command: Not Supported 00:27:44.637 Dataset Management Command: Not Supported 00:27:44.637 Write Zeroes Command: Not Supported 00:27:44.637 Set Features Save Field: Not Supported 00:27:44.637 Reservations: Not Supported 00:27:44.637 Timestamp: Not Supported 00:27:44.637 Copy: Not Supported 00:27:44.637 Volatile Write Cache: Not Present 00:27:44.637 Atomic Write Unit (Normal): 1 00:27:44.637 Atomic Write Unit (PFail): 1 00:27:44.637 Atomic Compare & Write Unit: 1 00:27:44.637 Fused Compare & Write: Not Supported 00:27:44.637 Scatter-Gather List 00:27:44.637 SGL Command Set: Supported 00:27:44.637 SGL Keyed: Not Supported 00:27:44.638 SGL Bit Bucket Descriptor: Not Supported 00:27:44.638 SGL Metadata Pointer: Not Supported 00:27:44.638 Oversized SGL: Not Supported 00:27:44.638 SGL Metadata Address: Not Supported 00:27:44.638 SGL Offset: Supported 00:27:44.638 Transport SGL Data Block: Not Supported 00:27:44.638 Replay Protected Memory Block: Not Supported 00:27:44.638 00:27:44.638 Firmware Slot Information 00:27:44.638 ========================= 00:27:44.638 Active slot: 0 00:27:44.638 00:27:44.638 00:27:44.638 Error Log 00:27:44.638 ========= 00:27:44.638 00:27:44.638 Active Namespaces 00:27:44.638 ================= 00:27:44.638 Discovery Log Page 00:27:44.638 ================== 00:27:44.638 Generation Counter: 2 00:27:44.638 Number of Records: 2 00:27:44.638 Record Format: 0 00:27:44.638 00:27:44.638 Discovery Log Entry 0 00:27:44.638 ---------------------- 00:27:44.638 Transport Type: 3 (TCP) 00:27:44.638 Address Family: 1 (IPv4) 00:27:44.638 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:27:44.638 Entry Flags: 00:27:44.638 Duplicate Returned Information: 0 00:27:44.638 Explicit Persistent Connection Support for Discovery: 0 00:27:44.638 Transport Requirements: 00:27:44.638 Secure Channel: Not Specified 00:27:44.638 Port ID: 1 (0x0001) 00:27:44.638 Controller ID: 65535 (0xffff) 00:27:44.638 Admin Max SQ Size: 32 00:27:44.638 Transport Service Identifier: 4420 00:27:44.638 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:44.638 Transport Address: 10.0.0.1 00:27:44.638 Discovery Log Entry 1 00:27:44.638 ---------------------- 00:27:44.638 Transport Type: 3 (TCP) 00:27:44.638 Address Family: 1 (IPv4) 00:27:44.638 Subsystem Type: 2 (NVM Subsystem) 00:27:44.638 Entry Flags: 00:27:44.638 Duplicate Returned Information: 0 00:27:44.638 Explicit Persistent Connection Support for Discovery: 0 00:27:44.638 Transport Requirements: 00:27:44.638 Secure Channel: Not Specified 00:27:44.638 Port ID: 1 (0x0001) 00:27:44.638 Controller ID: 65535 (0xffff) 00:27:44.638 Admin Max SQ Size: 32 00:27:44.638 Transport Service Identifier: 4420 00:27:44.638 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:44.638 Transport Address: 10.0.0.1 00:27:44.638 18:26:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:44.638 get_feature(0x01) failed 00:27:44.638 get_feature(0x02) failed 00:27:44.638 get_feature(0x04) failed 00:27:44.638 ===================================================== 00:27:44.638 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:44.638 ===================================================== 00:27:44.638 Controller Capabilities/Features 00:27:44.638 ================================ 00:27:44.638 Vendor ID: 0000 00:27:44.638 Subsystem Vendor ID: 
0000 00:27:44.638 Serial Number: b2699d118b7eda254e4c 00:27:44.638 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:44.638 Firmware Version: 6.8.9-20 00:27:44.638 Recommended Arb Burst: 6 00:27:44.638 IEEE OUI Identifier: 00 00 00 00:27:44.638 Multi-path I/O 00:27:44.638 May have multiple subsystem ports: Yes 00:27:44.638 May have multiple controllers: Yes 00:27:44.638 Associated with SR-IOV VF: No 00:27:44.638 Max Data Transfer Size: Unlimited 00:27:44.638 Max Number of Namespaces: 1024 00:27:44.638 Max Number of I/O Queues: 128 00:27:44.638 NVMe Specification Version (VS): 1.3 00:27:44.638 NVMe Specification Version (Identify): 1.3 00:27:44.638 Maximum Queue Entries: 1024 00:27:44.638 Contiguous Queues Required: No 00:27:44.638 Arbitration Mechanisms Supported 00:27:44.638 Weighted Round Robin: Not Supported 00:27:44.638 Vendor Specific: Not Supported 00:27:44.638 Reset Timeout: 7500 ms 00:27:44.638 Doorbell Stride: 4 bytes 00:27:44.638 NVM Subsystem Reset: Not Supported 00:27:44.638 Command Sets Supported 00:27:44.638 NVM Command Set: Supported 00:27:44.638 Boot Partition: Not Supported 00:27:44.638 Memory Page Size Minimum: 4096 bytes 00:27:44.638 Memory Page Size Maximum: 4096 bytes 00:27:44.638 Persistent Memory Region: Not Supported 00:27:44.638 Optional Asynchronous Events Supported 00:27:44.638 Namespace Attribute Notices: Supported 00:27:44.638 Firmware Activation Notices: Not Supported 00:27:44.638 ANA Change Notices: Supported 00:27:44.638 PLE Aggregate Log Change Notices: Not Supported 00:27:44.638 LBA Status Info Alert Notices: Not Supported 00:27:44.638 EGE Aggregate Log Change Notices: Not Supported 00:27:44.638 Normal NVM Subsystem Shutdown event: Not Supported 00:27:44.638 Zone Descriptor Change Notices: Not Supported 00:27:44.638 Discovery Log Change Notices: Not Supported 00:27:44.638 Controller Attributes 00:27:44.638 128-bit Host Identifier: Supported 00:27:44.638 Non-Operational Permissive Mode: Not Supported 00:27:44.638 NVM Sets: Not 
Supported 00:27:44.638 Read Recovery Levels: Not Supported 00:27:44.638 Endurance Groups: Not Supported 00:27:44.638 Predictable Latency Mode: Not Supported 00:27:44.638 Traffic Based Keep Alive: Supported 00:27:44.638 Namespace Granularity: Not Supported 00:27:44.638 SQ Associations: Not Supported 00:27:44.638 UUID List: Not Supported 00:27:44.638 Multi-Domain Subsystem: Not Supported 00:27:44.638 Fixed Capacity Management: Not Supported 00:27:44.638 Variable Capacity Management: Not Supported 00:27:44.638 Delete Endurance Group: Not Supported 00:27:44.638 Delete NVM Set: Not Supported 00:27:44.638 Extended LBA Formats Supported: Not Supported 00:27:44.638 Flexible Data Placement Supported: Not Supported 00:27:44.638 00:27:44.638 Controller Memory Buffer Support 00:27:44.638 ================================ 00:27:44.638 Supported: No 00:27:44.638 00:27:44.638 Persistent Memory Region Support 00:27:44.638 ================================ 00:27:44.638 Supported: No 00:27:44.638 00:27:44.638 Admin Command Set Attributes 00:27:44.638 ============================ 00:27:44.638 Security Send/Receive: Not Supported 00:27:44.638 Format NVM: Not Supported 00:27:44.638 Firmware Activate/Download: Not Supported 00:27:44.638 Namespace Management: Not Supported 00:27:44.638 Device Self-Test: Not Supported 00:27:44.638 Directives: Not Supported 00:27:44.638 NVMe-MI: Not Supported 00:27:44.638 Virtualization Management: Not Supported 00:27:44.638 Doorbell Buffer Config: Not Supported 00:27:44.638 Get LBA Status Capability: Not Supported 00:27:44.638 Command & Feature Lockdown Capability: Not Supported 00:27:44.638 Abort Command Limit: 4 00:27:44.638 Async Event Request Limit: 4 00:27:44.638 Number of Firmware Slots: N/A 00:27:44.638 Firmware Slot 1 Read-Only: N/A 00:27:44.638 Firmware Activation Without Reset: N/A 00:27:44.638 Multiple Update Detection Support: N/A 00:27:44.638 Firmware Update Granularity: No Information Provided 00:27:44.638 Per-Namespace SMART Log: Yes 
00:27:44.638 Asymmetric Namespace Access Log Page: Supported 00:27:44.638 ANA Transition Time : 10 sec 00:27:44.638 00:27:44.638 Asymmetric Namespace Access Capabilities 00:27:44.638 ANA Optimized State : Supported 00:27:44.638 ANA Non-Optimized State : Supported 00:27:44.638 ANA Inaccessible State : Supported 00:27:44.638 ANA Persistent Loss State : Supported 00:27:44.638 ANA Change State : Supported 00:27:44.638 ANAGRPID is not changed : No 00:27:44.638 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:44.638 00:27:44.638 ANA Group Identifier Maximum : 128 00:27:44.638 Number of ANA Group Identifiers : 128 00:27:44.638 Max Number of Allowed Namespaces : 1024 00:27:44.638 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:44.638 Command Effects Log Page: Supported 00:27:44.638 Get Log Page Extended Data: Supported 00:27:44.638 Telemetry Log Pages: Not Supported 00:27:44.638 Persistent Event Log Pages: Not Supported 00:27:44.638 Supported Log Pages Log Page: May Support 00:27:44.638 Commands Supported & Effects Log Page: Not Supported 00:27:44.638 Feature Identifiers & Effects Log Page: May Support 00:27:44.638 NVMe-MI Commands & Effects Log Page: May Support 00:27:44.638 Data Area 4 for Telemetry Log: Not Supported 00:27:44.638 Error Log Page Entries Supported: 128 00:27:44.638 Keep Alive: Supported 00:27:44.638 Keep Alive Granularity: 1000 ms 00:27:44.638 00:27:44.638 NVM Command Set Attributes 00:27:44.638 ========================== 00:27:44.638 Submission Queue Entry Size 00:27:44.638 Max: 64 00:27:44.638 Min: 64 00:27:44.638 Completion Queue Entry Size 00:27:44.638 Max: 16 00:27:44.638 Min: 16 00:27:44.638 Number of Namespaces: 1024 00:27:44.638 Compare Command: Not Supported 00:27:44.639 Write Uncorrectable Command: Not Supported 00:27:44.639 Dataset Management Command: Supported 00:27:44.639 Write Zeroes Command: Supported 00:27:44.639 Set Features Save Field: Not Supported 00:27:44.639 Reservations: Not Supported 00:27:44.639 Timestamp: Not Supported 
00:27:44.639 Copy: Not Supported 00:27:44.639 Volatile Write Cache: Present 00:27:44.639 Atomic Write Unit (Normal): 1 00:27:44.639 Atomic Write Unit (PFail): 1 00:27:44.639 Atomic Compare & Write Unit: 1 00:27:44.639 Fused Compare & Write: Not Supported 00:27:44.639 Scatter-Gather List 00:27:44.639 SGL Command Set: Supported 00:27:44.639 SGL Keyed: Not Supported 00:27:44.639 SGL Bit Bucket Descriptor: Not Supported 00:27:44.639 SGL Metadata Pointer: Not Supported 00:27:44.639 Oversized SGL: Not Supported 00:27:44.639 SGL Metadata Address: Not Supported 00:27:44.639 SGL Offset: Supported 00:27:44.639 Transport SGL Data Block: Not Supported 00:27:44.639 Replay Protected Memory Block: Not Supported 00:27:44.639 00:27:44.639 Firmware Slot Information 00:27:44.639 ========================= 00:27:44.639 Active slot: 0 00:27:44.639 00:27:44.639 Asymmetric Namespace Access 00:27:44.639 =========================== 00:27:44.639 Change Count : 0 00:27:44.639 Number of ANA Group Descriptors : 1 00:27:44.639 ANA Group Descriptor : 0 00:27:44.639 ANA Group ID : 1 00:27:44.639 Number of NSID Values : 1 00:27:44.639 Change Count : 0 00:27:44.639 ANA State : 1 00:27:44.639 Namespace Identifier : 1 00:27:44.639 00:27:44.639 Commands Supported and Effects 00:27:44.639 ============================== 00:27:44.639 Admin Commands 00:27:44.639 -------------- 00:27:44.639 Get Log Page (02h): Supported 00:27:44.639 Identify (06h): Supported 00:27:44.639 Abort (08h): Supported 00:27:44.639 Set Features (09h): Supported 00:27:44.639 Get Features (0Ah): Supported 00:27:44.639 Asynchronous Event Request (0Ch): Supported 00:27:44.639 Keep Alive (18h): Supported 00:27:44.639 I/O Commands 00:27:44.639 ------------ 00:27:44.639 Flush (00h): Supported 00:27:44.639 Write (01h): Supported LBA-Change 00:27:44.639 Read (02h): Supported 00:27:44.639 Write Zeroes (08h): Supported LBA-Change 00:27:44.639 Dataset Management (09h): Supported 00:27:44.639 00:27:44.639 Error Log 00:27:44.639 ========= 
00:27:44.639 Entry: 0 00:27:44.639 Error Count: 0x3 00:27:44.639 Submission Queue Id: 0x0 00:27:44.639 Command Id: 0x5 00:27:44.639 Phase Bit: 0 00:27:44.639 Status Code: 0x2 00:27:44.639 Status Code Type: 0x0 00:27:44.639 Do Not Retry: 1 00:27:44.639 Error Location: 0x28 00:27:44.639 LBA: 0x0 00:27:44.639 Namespace: 0x0 00:27:44.639 Vendor Log Page: 0x0 00:27:44.639 ----------- 00:27:44.639 Entry: 1 00:27:44.639 Error Count: 0x2 00:27:44.639 Submission Queue Id: 0x0 00:27:44.639 Command Id: 0x5 00:27:44.639 Phase Bit: 0 00:27:44.639 Status Code: 0x2 00:27:44.639 Status Code Type: 0x0 00:27:44.639 Do Not Retry: 1 00:27:44.639 Error Location: 0x28 00:27:44.639 LBA: 0x0 00:27:44.639 Namespace: 0x0 00:27:44.639 Vendor Log Page: 0x0 00:27:44.639 ----------- 00:27:44.639 Entry: 2 00:27:44.639 Error Count: 0x1 00:27:44.639 Submission Queue Id: 0x0 00:27:44.639 Command Id: 0x4 00:27:44.639 Phase Bit: 0 00:27:44.639 Status Code: 0x2 00:27:44.639 Status Code Type: 0x0 00:27:44.639 Do Not Retry: 1 00:27:44.639 Error Location: 0x28 00:27:44.639 LBA: 0x0 00:27:44.639 Namespace: 0x0 00:27:44.639 Vendor Log Page: 0x0 00:27:44.639 00:27:44.639 Number of Queues 00:27:44.639 ================ 00:27:44.639 Number of I/O Submission Queues: 128 00:27:44.639 Number of I/O Completion Queues: 128 00:27:44.639 00:27:44.639 ZNS Specific Controller Data 00:27:44.639 ============================ 00:27:44.639 Zone Append Size Limit: 0 00:27:44.639 00:27:44.639 00:27:44.639 Active Namespaces 00:27:44.639 ================= 00:27:44.639 get_feature(0x05) failed 00:27:44.639 Namespace ID:1 00:27:44.639 Command Set Identifier: NVM (00h) 00:27:44.639 Deallocate: Supported 00:27:44.639 Deallocated/Unwritten Error: Not Supported 00:27:44.639 Deallocated Read Value: Unknown 00:27:44.639 Deallocate in Write Zeroes: Not Supported 00:27:44.639 Deallocated Guard Field: 0xFFFF 00:27:44.639 Flush: Supported 00:27:44.639 Reservation: Not Supported 00:27:44.639 Namespace Sharing Capabilities: Multiple 
Controllers 00:27:44.639 Size (in LBAs): 3750748848 (1788GiB) 00:27:44.639 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:44.639 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:44.639 UUID: 6825f217-0671-4158-bb3b-6082f59391a3 00:27:44.639 Thin Provisioning: Not Supported 00:27:44.639 Per-NS Atomic Units: Yes 00:27:44.639 Atomic Write Unit (Normal): 8 00:27:44.639 Atomic Write Unit (PFail): 8 00:27:44.639 Preferred Write Granularity: 8 00:27:44.639 Atomic Compare & Write Unit: 8 00:27:44.639 Atomic Boundary Size (Normal): 0 00:27:44.639 Atomic Boundary Size (PFail): 0 00:27:44.639 Atomic Boundary Offset: 0 00:27:44.639 NGUID/EUI64 Never Reused: No 00:27:44.639 ANA group ID: 1 00:27:44.639 Namespace Write Protected: No 00:27:44.639 Number of LBA Formats: 1 00:27:44.639 Current LBA Format: LBA Format #00 00:27:44.639 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:44.639 00:27:44.639 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:44.639 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:44.639 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:44.639 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:44.639 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:44.639 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:44.639 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:44.639 rmmod nvme_tcp 00:27:44.901 rmmod nvme_fabrics 00:27:44.901 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:44.901 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:44.901 18:26:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:44.901 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:44.901 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:44.901 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:44.901 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:44.901 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:44.901 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:44.901 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:44.901 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:44.901 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:44.901 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:44.901 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.901 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.901 18:26:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.816 18:26:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:46.816 18:26:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:46.816 18:26:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:46.816 18:26:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:46.816 18:26:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:46.816 18:26:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:46.816 18:26:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:46.816 18:26:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:46.816 18:26:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:46.816 18:26:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:47.078 18:26:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:50.382 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:50.382 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:50.382 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:50.382 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:50.382 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:50.382 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:50.643 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:50.643 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:50.643 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:50.643 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:50.643 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:50.643 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:50.643 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:27:50.643 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:50.643 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:50.643 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:50.643 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:51.215 00:27:51.215 real 0m19.708s 00:27:51.215 user 0m5.340s 00:27:51.215 sys 0m11.347s 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:51.215 ************************************ 00:27:51.215 END TEST nvmf_identify_kernel_target 00:27:51.215 ************************************ 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.215 ************************************ 00:27:51.215 START TEST nvmf_auth_host 00:27:51.215 ************************************ 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:51.215 * Looking for test storage... 
00:27:51.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:51.215 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:51.477 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:51.477 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:51.477 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:51.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.477 --rc genhtml_branch_coverage=1 00:27:51.477 --rc genhtml_function_coverage=1 00:27:51.477 --rc genhtml_legend=1 00:27:51.477 --rc geninfo_all_blocks=1 00:27:51.477 --rc geninfo_unexecuted_blocks=1 00:27:51.477 00:27:51.477 ' 00:27:51.477 18:26:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:51.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.477 --rc genhtml_branch_coverage=1 00:27:51.477 --rc genhtml_function_coverage=1 00:27:51.477 --rc genhtml_legend=1 00:27:51.477 --rc geninfo_all_blocks=1 00:27:51.477 --rc geninfo_unexecuted_blocks=1 00:27:51.477 00:27:51.477 ' 00:27:51.477 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:51.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.477 --rc genhtml_branch_coverage=1 00:27:51.477 --rc genhtml_function_coverage=1 00:27:51.477 --rc genhtml_legend=1 00:27:51.477 --rc geninfo_all_blocks=1 00:27:51.477 --rc geninfo_unexecuted_blocks=1 00:27:51.477 00:27:51.477 ' 00:27:51.477 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:51.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.477 --rc genhtml_branch_coverage=1 00:27:51.478 --rc genhtml_function_coverage=1 00:27:51.478 --rc genhtml_legend=1 00:27:51.478 --rc geninfo_all_blocks=1 00:27:51.478 --rc geninfo_unexecuted_blocks=1 00:27:51.478 00:27:51.478 ' 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.478 18:26:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:51.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:51.478 18:26:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:51.478 18:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.621 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:59.622 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:59.622 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:59.622 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:59.622 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:59.622 18:26:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:59.622 18:26:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.622 18:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.622 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:59.622 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.622 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.622 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.622 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:59.622 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:59.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:27:59.622 00:27:59.622 --- 10.0.0.2 ping statistics --- 00:27:59.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.622 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:27:59.622 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:59.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:27:59.622 00:27:59.622 --- 10.0.0.1 ping statistics --- 00:27:59.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.622 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:27:59.622 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2141896 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2141896 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2141896 ']' 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:59.623 18:27:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.623 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:59.623 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:59.623 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:59.623 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:59.623 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.884 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.884 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:59.884 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:59.884 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:59.884 18:27:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:59.884 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:59.884 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:59.884 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:59.884 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:59.884 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b02e9e08baa5a8387bcd803c857d7e95 00:27:59.884 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:59.884 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kki 00:27:59.884 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b02e9e08baa5a8387bcd803c857d7e95 0 00:27:59.884 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b02e9e08baa5a8387bcd803c857d7e95 0 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b02e9e08baa5a8387bcd803c857d7e95 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kki 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kki 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.kki 
00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a9509ab81ed22a9faff0e02504065ece05cfffd83227e90b6c7f13c6140d51d8 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.MFz 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a9509ab81ed22a9faff0e02504065ece05cfffd83227e90b6c7f13c6140d51d8 3 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a9509ab81ed22a9faff0e02504065ece05cfffd83227e90b6c7f13c6140d51d8 3 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a9509ab81ed22a9faff0e02504065ece05cfffd83227e90b6c7f13c6140d51d8 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.MFz 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.MFz 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.MFz 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ca55cb8f18c8c23220717d040f154ec0095d07a43f6ef11d 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.G1M 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ca55cb8f18c8c23220717d040f154ec0095d07a43f6ef11d 0 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ca55cb8f18c8c23220717d040f154ec0095d07a43f6ef11d 0 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ca55cb8f18c8c23220717d040f154ec0095d07a43f6ef11d 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.G1M 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.G1M 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.G1M 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=167333c59583518e96ca8ba9fa9e762cacc29b474bdf00e6 00:27:59.885 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.qq5 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 167333c59583518e96ca8ba9fa9e762cacc29b474bdf00e6 2 00:28:00.147 18:27:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 167333c59583518e96ca8ba9fa9e762cacc29b474bdf00e6 2 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=167333c59583518e96ca8ba9fa9e762cacc29b474bdf00e6 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.qq5 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.qq5 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.qq5 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d8b3eaafa501a0f60eb66fc48c064ab9 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.uNn 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d8b3eaafa501a0f60eb66fc48c064ab9 1 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d8b3eaafa501a0f60eb66fc48c064ab9 1 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d8b3eaafa501a0f60eb66fc48c064ab9 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.uNn 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.uNn 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.uNn 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1af90fb7992e8ec3deb317f1400b3257 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.KKc 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1af90fb7992e8ec3deb317f1400b3257 1 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1af90fb7992e8ec3deb317f1400b3257 1 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1af90fb7992e8ec3deb317f1400b3257 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.KKc 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.KKc 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.KKc 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:00.147 18:27:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=10a2cb792ef9fb817f20adafe8740b4e80e52fb69d0fa444 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Ixu 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 10a2cb792ef9fb817f20adafe8740b4e80e52fb69d0fa444 2 00:28:00.147 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 10a2cb792ef9fb817f20adafe8740b4e80e52fb69d0fa444 2 00:28:00.148 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:00.148 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:00.148 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=10a2cb792ef9fb817f20adafe8740b4e80e52fb69d0fa444 00:28:00.148 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:00.148 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:00.148 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Ixu 00:28:00.148 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Ixu 00:28:00.148 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Ixu 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=80e0f0aca2657567a1ae5b91f4d5cce2 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Rnu 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 80e0f0aca2657567a1ae5b91f4d5cce2 0 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 80e0f0aca2657567a1ae5b91f4d5cce2 0 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=80e0f0aca2657567a1ae5b91f4d5cce2 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Rnu 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Rnu 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.Rnu 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.412 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d509404a8b60aff0c5120c518d17c79391c8ac1de78153f5e4b8917056598260 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.2Ac 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d509404a8b60aff0c5120c518d17c79391c8ac1de78153f5e4b8917056598260 3 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d509404a8b60aff0c5120c518d17c79391c8ac1de78153f5e4b8917056598260 3 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d509404a8b60aff0c5120c518d17c79391c8ac1de78153f5e4b8917056598260 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:00.413 18:27:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.2Ac 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.2Ac 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.2Ac 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2141896 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2141896 ']' 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.413 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:00.414 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
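The gen_dhchap_key / format_dhchap_key calls traced above boil down to: draw len/2 random bytes as a hex string, then wrap that ASCII hex in the DHHC-1 secret format and store it 0600 in a temp file. The exact encoding is hidden inside the `python -` heredoc, so the sketch below infers it from the keys visible later in this log (e.g. `DHHC-1:00:Y2E1...V1supA==:`): the base64 payload is the ASCII hex string plus its little-endian CRC-32, and the second field is the digest id. A runnable approximation, assuming bash 4+, `xxd`, and `python3` on PATH:

```shell
gen_dhchap_key() {
	local digest=$1 len=$2 key file id
	# digest name -> DHHC-1 header id, as in the digests map traced above
	local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
	id=${digests[$digest]}
	# len is the hex length, so pull len/2 random bytes
	key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
	file=$(mktemp -t "spdk.key-$digest.XXX")
	# Assumed encoding (inferred from the resulting keys, not from the
	# hidden heredoc): base64(ascii_hex || crc32_le), colon-terminated.
	python3 - "$key" "$id" > "$file" << 'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:", end="")
EOF
	chmod 0600 "$file"
	echo "$file"
}
```

For `gen_dhchap_key sha256 32` this yields a `/tmp/spdk.key-sha256.*` file holding a `DHHC-1:01:...:` secret, matching the shape of the keys registered later in the trace.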
00:28:00.414 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:00.414 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.678 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:00.678 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:00.678 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.678 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kki 00:28:00.678 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.678 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.678 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.678 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.MFz ]] 00:28:00.678 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MFz 00:28:00.678 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.678 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.678 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.678 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.G1M 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.qq5 ]] 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qq5 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.uNn 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.KKc ]] 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KKc 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.Ixu 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.679 18:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Rnu ]] 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Rnu 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.2Ac 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.679 18:27:02 
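The get_main_ns_ip helper traced repeatedly in this log is a small bash idiom worth calling out: the associative array maps a transport to the *name* of the environment variable holding the address, and the name is then dereferenced with indirect expansion. A self-contained re-creation (the 10.0.0.x values are assumed to match this run; the rdma branch is never taken here):

```shell
NVMF_INITIATOR_IP=10.0.0.1       # value seen in the trace
NVMF_FIRST_TARGET_IP=10.0.0.2    # hypothetical rdma-side address

get_main_ns_ip() {
	local transport=$1 ip
	local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
	ip=${ip_candidates[$transport]}   # this is a variable *name*...
	[[ -z $transport || -z $ip ]] && return 1
	echo "${!ip}"                     # ...dereferenced via indirect expansion
}

get_main_ns_ip tcp    # prints 10.0.0.1
```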
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:00.679 18:27:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:03.983 Waiting for block devices as requested 00:28:04.244 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:04.244 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:04.244 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:04.244 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:04.505 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:04.505 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:04.505 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:04.766 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:04.767 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:05.028 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:05.028 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:05.028 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:05.288 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:05.288 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:05.288 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:05.288 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:05.550 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:06.492 No valid GPT data, bailing 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:06.492 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:06.492 00:28:06.492 Discovery Log Number of Records 2, Generation counter 2 00:28:06.492 =====Discovery Log Entry 0====== 00:28:06.492 trtype: tcp 00:28:06.492 adrfam: ipv4 00:28:06.492 subtype: current discovery subsystem 00:28:06.492 treq: not specified, sq flow control disable supported 00:28:06.492 portid: 1 00:28:06.492 trsvcid: 4420 00:28:06.492 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:06.492 traddr: 10.0.0.1 00:28:06.492 eflags: none 00:28:06.493 sectype: none 00:28:06.493 =====Discovery Log Entry 1====== 00:28:06.493 trtype: tcp 00:28:06.493 adrfam: ipv4 00:28:06.493 subtype: nvme subsystem 00:28:06.493 treq: not specified, sq flow control disable supported 00:28:06.493 portid: 1 00:28:06.493 trsvcid: 4420 00:28:06.493 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:06.493 traddr: 10.0.0.1 00:28:06.493 eflags: none 00:28:06.493 sectype: none 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
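The configure_kernel_target steps traced above are pure configfs manipulation; xtrace shows only the `mkdir`/`echo`/`ln` commands, not the redirect targets, so the attribute names below are filled in from the kernel nvmet configfs interface rather than from this log. A condensed sketch (requires root, `nvmet`/`nvmet-tcp` modules, and the `/dev/nvme0n1` backing device found above):

```shell
#!/usr/bin/env bash
set -e
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1

modprobe nvmet nvmet-tcp

# Subsystem with one namespace backed by the local NVMe drive.
mkdir -p "$subsys/namespaces/1" "$port"
echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_serial"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

# TCP port 4420 on the initiator-visible address, then expose the subsystem.
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
```

With that in place, `nvme discover -t tcp -a 10.0.0.1 -s 4420` returns the two discovery-log records shown above (the discovery subsystem itself plus `nqn.2024-02.io.spdk:cnode0`).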
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]] 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
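The nvmet_auth_set_key call above (`sha256 ffdhe2048 1`) writes the hash name, DH group, and DHHC-1 secrets into the host's configfs entry; again only the `echo` side is visible in the trace, so the attribute names below are taken from the kernel nvmet DH-HMAC-CHAP configfs interface (Linux 5.17+) and should be treated as an assumption. Sketch, using the key1/ckey1 values from this log:

```shell
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 key=$3 ckey=$4
	echo "hmac($digest)" > "$host/dhchap_hash"
	echo "$dhgroup"      > "$host/dhchap_dhgroup"
	echo "$key"          > "$host/dhchap_key"
	# Bidirectional auth only when a controller key was generated.
	[[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"
	return 0
}

nvmet_auth_set_key sha256 ffdhe2048 \
	"DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==:" \
	"DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==:"
```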
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.493 18:27:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.755 nvme0n1 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: ]] 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.755 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.017 nvme0n1 00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.017 18:27:08 
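On the initiator side, connect_authenticate is a two-RPC sequence: constrain the allowed digests/DH groups, then attach with the keyring names registered earlier (keyN for the host secret, ckeyN for the controller secret). The sketch below reproduces that sequence; `RPC` defaults to a dry-run `echo rpc.py` so the commands can be inspected without a running SPDK target (set `RPC=rpc.py` against a live one — the flag names match the trace, the address/NQNs are this run's values):

```shell
RPC="${RPC:-echo rpc.py}"   # dry-run by default; assumes rpc.py when live

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# Restrict the initiator to the digests/dhgroups under test.
	$RPC bdev_nvme_set_options \
		--dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	# Attach with the host key (keyN) and bidirectional controller key (ckeyN).
	$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a 10.0.0.1 -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
}

connect_authenticate sha256 ffdhe2048 1
```

The surrounding loop then verifies the controller came up (`bdev_nvme_get_controllers` reports `nvme0`, the `nvme0n1` namespace appears) and detaches before the next digest/dhgroup/key combination.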
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==:
00:28:07.017 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==:
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==:
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]]
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==:
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.018 nvme0n1
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.018 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+:
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc:
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+:
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: ]]
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc:
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.279 nvme0n1
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.279 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==:
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11:
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==:
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: ]]
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11:
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.540 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.540 nvme0n1
00:28:07.541 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.541 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:07.541 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:07.541 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.541 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.541 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.541 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:07.541 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:07.541 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.541 18:27:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=:
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=:
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.541 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.802 nvme0n1
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik:
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=:
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik:
00:28:07.802 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: ]]
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=:
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:07.803 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.063 nvme0n1
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==:
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==:
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==:
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]]
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==:
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.063 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.323 nvme0n1
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+:
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc:
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+:
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: ]]
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc:
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.323 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.324 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.585 nvme0n1
18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.585 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:08.585 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:08.585 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.585 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.585 18:27:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==:
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11:
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==:
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: ]]
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11:
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- #
get_main_ns_ip 00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.585 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.846 nvme0n1 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.846 18:27:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.846 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.107 nvme0n1 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: ]] 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.107 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.367 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.367 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.367 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.367 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:28:09.367 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.367 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.367 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:09.367 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.367 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.367 nvme0n1 00:28:09.367 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.367 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.367 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.367 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.367 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]] 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.627 
18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.627 18:27:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.888 nvme0n1 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.888 18:27:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: ]] 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.888 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.149 nvme0n1 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.149 18:27:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:10.149 
18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: ]] 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.149 18:27:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.149 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.410 nvme0n1 00:28:10.410 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.410 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.410 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.410 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.410 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.410 18:27:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.670 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.671 
18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.671 18:27:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.932 nvme0n1 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: ]] 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.932 18:27:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.932 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.502 nvme0n1 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]] 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.502 18:27:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.502 18:27:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.763 nvme0n1 00:28:11.763 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.763 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.763 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.763 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.763 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.763 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.763 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.763 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.763 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.763 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.022 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.022 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.022 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:12.022 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.022 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.022 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.022 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:28:12.022 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:12.022 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:12.022 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.022 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: ]] 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.023 18:27:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.023 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.283 nvme0n1 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.283 18:27:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.283 18:27:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: ]] 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.283 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.543 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.543 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.543 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.543 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.543 18:27:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.543 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.543 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.543 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.543 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.543 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.543 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.543 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.543 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:12.543 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.543 18:27:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.803 nvme0n1 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.803 18:27:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.803 18:27:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.803 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.373 nvme0n1 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: ]] 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:13.373 18:27:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.373 18:27:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.943 nvme0n1 00:28:13.943 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.943 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.943 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.943 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.943 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.203 18:27:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]] 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.203 18:27:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.203 18:27:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.203 18:27:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.774 nvme0n1 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: ]] 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.774 18:27:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.774 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.716 nvme0n1 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: ]] 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:15.716 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.717 18:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.285 nvme0n1 00:28:16.285 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.286 
18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.286 18:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.855 nvme0n1 00:28:16.855 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.855 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.855 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.855 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.855 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.855 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: ]] 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.116 nvme0n1 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.116 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.377 
18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]] 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.377 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.378 nvme0n1 
00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:17.378 18:27:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: ]] 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.378 
18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.378 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.640 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.640 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:17.640 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.640 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.640 nvme0n1 00:28:17.640 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.640 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.640 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.640 18:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.640 18:27:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.640 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.640 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.640 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.640 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.640 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.640 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.640 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.640 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:17.640 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.640 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: ]] 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.641 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.905 nvme0n1 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:17.905 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.905 18:27:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.906 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.167 nvme0n1 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: ]] 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.167 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.427 nvme0n1 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:18.428 
18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]] 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.428 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.688 nvme0n1 00:28:18.688 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:28:18.688 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.688 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.688 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.689 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.689 18:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 
00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: ]] 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.689 18:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.689 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.949 nvme0n1 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.949 18:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:18.949 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: ]] 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.950 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.210 nvme0n1 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.210 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.471 nvme0n1 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.471 18:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: ]] 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.471 18:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.471 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.472 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.472 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.472 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.472 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.472 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.472 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:19.472 18:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.472 18:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.731 nvme0n1 00:28:19.731 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.731 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.731 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.731 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.731 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.731 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.731 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.731 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.731 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.731 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.731 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.731 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]] 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.732 
18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.732 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.993 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.993 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.993 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.993 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.993 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.993 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.993 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.993 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.993 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:19.993 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.993 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.993 nvme0n1 00:28:19.993 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.993 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.993 18:27:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.993 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.993 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.254 18:27:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: ]] 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.254 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.514 nvme0n1 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: ]] 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.514 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:20.515 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.515 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.515 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.515 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.515 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:20.515 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.515 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.515 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.515 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.515 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:20.515 18:27:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.515 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:20.515 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:20.515 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:20.515 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:20.515 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.515 18:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.775 nvme0n1 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.775 18:27:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:20.775 18:27:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:20.775 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:20.776 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:20.776 
18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.776 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.037 nvme0n1 00:28:21.037 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.037 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.037 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.037 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.037 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.037 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.299 18:27:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: ]] 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.299 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.560 nvme0n1 
00:28:21.560 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.560 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.560 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.560 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.560 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.560 18:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.560 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.560 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.560 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.560 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:21.821 18:27:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]] 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.821 
18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.821 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.822 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.822 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:21.822 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.822 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.082 nvme0n1 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.082 18:27:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:22.082 18:27:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: ]] 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:22.082 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.083 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:22.343 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.343 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.604 nvme0n1 00:28:22.604 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.604 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.604 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.604 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.605 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.605 18:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: ]] 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:22.605 18:27:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.605 18:27:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.605 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.178 nvme0n1 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.178 18:27:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.178 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.179 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.179 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.179 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.179 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:23.179 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:23.179 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.749 nvme0n1 00:28:23.749 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.749 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.749 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.749 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.749 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.749 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.749 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.749 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.749 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.749 18:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.749 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.749 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:23.749 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.749 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:23.749 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.749 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.749 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:23.749 18:27:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:23.749 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:23.749 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:23.749 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.749 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:23.749 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:23.749 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: ]] 00:28:23.749 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.750 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.322 nvme0n1 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]] 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.322 18:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.266 nvme0n1 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: ]] 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.266 18:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.837 nvme0n1 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: ]] 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.837 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:25.838 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.838 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.838 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.838 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.838 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.838 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.838 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.838 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.838 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.838 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.838 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.838 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:28:25.838 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.838 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.838 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:25.838 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.838 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.407 nvme0n1 00:28:26.407 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.407 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.407 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.407 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.407 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.407 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.668 18:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:27.240 nvme0n1 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: ]] 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:27.240 18:27:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.240 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.502 nvme0n1 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]] 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.502 18:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.762 nvme0n1 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:27.762 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: ]] 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.763 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.023 nvme0n1 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: ]] 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.023 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.284 nvme0n1 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:28.284 nvme0n1 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.284 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.545 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.545 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.545 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.545 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.545 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.545 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:28.545 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.545 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:28.545 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.545 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.545 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:28.545 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:28.545 18:27:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:28.545 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:28.545 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.545 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: ]] 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.546 18:27:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.546 nvme0n1 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.546 18:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.546 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:28.807 18:27:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]] 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.807 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.808 nvme0n1 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.808 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.808 
18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: ]] 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.069 18:27:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.069 nvme0n1 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.069 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.330 18:27:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: ]] 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.330 18:27:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.330 nvme0n1 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.330 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:29.591 18:27:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.591 18:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.591 nvme0n1 00:28:29.591 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.591 
18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.591 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.591 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.591 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: ]] 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.852 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.853 
18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.853 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.853 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.853 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.853 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.853 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.853 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.853 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.853 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.853 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.853 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.853 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:29.853 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.853 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.114 nvme0n1 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.114 18:27:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]] 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.114 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.374 nvme0n1 00:28:30.374 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.374 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.374 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.374 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.374 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.374 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.374 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.374 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.374 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.374 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.374 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.374 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: ]] 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.375 18:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.635 nvme0n1 00:28:30.635 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.635 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.635 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.635 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.635 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.635 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.896 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.896 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.896 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: ]] 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.897 18:27:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.897 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.158 nvme0n1 00:28:31.158 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.158 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.158 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.158 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.158 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.158 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.158 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.158 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.158 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.158 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.158 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.158 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.158 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:31.158 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.158 18:27:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:31.158 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.158 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.159 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.420 nvme0n1 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.420 
18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: ]] 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.420 18:27:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.420 18:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.993 nvme0n1 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:31.993 18:27:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]] 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.993 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.994 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.994 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.994 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.994 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.994 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.994 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:31.994 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.994 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.646 nvme0n1 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: ]] 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:32.646 
18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.646 18:27:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.646 18:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.937 nvme0n1 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.937 18:27:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: ]] 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
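Each connect_authenticate iteration above calls get_main_ns_ip to resolve the initiator address (10.0.0.1) before bdev_nvme_attach_controller. The pattern in the trace can be paraphrased as a self-contained sketch; the variable values are copied from the log, and the helper body is a simplified reading of the nvmf/common.sh trace, not the verbatim source:

```shell
# Sketch of the get_main_ns_ip pattern traced above: an associative array
# maps each transport to the *name* of the variable holding the main
# namespace IP, and bash indirect expansion (${!var}) dereferences it.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1       # chosen for tcp, as in the log
NVMF_FIRST_TARGET_IP=10.0.0.2    # would be chosen for rdma

get_main_ns_ip() {
	local ip
	local -A ip_candidates
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP

	if [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]]; then
		return 1
	fi

	ip=${ip_candidates[$TEST_TRANSPORT]}   # variable *name*, e.g. NVMF_INITIATOR_IP
	ip=${!ip}                              # indirect expansion -> its value
	[[ -n $ip ]] && echo "$ip"
}

get_main_ns_ip    # prints 10.0.0.1
```

The indirection lets the same helper serve both transports; the attach command in the trace then consumes the echoed address as `-a 10.0.0.1`.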
00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.937 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.586 nvme0n1 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.586 18:27:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.586 18:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.848 nvme0n1 00:28:33.848 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.848 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.848 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.848 
18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.848 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.848 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.848 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.848 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.848 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.848 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.848 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.848 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:33.848 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.848 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjAyZTllMDhiYWE1YTgzODdiY2Q4MDNjODU3ZDdlOTVFI1Ik: 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: ]] 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk1MDlhYjgxZWQyMmE5ZmFmZjBlMDI1MDQwNjVlY2UwNWNmZmZkODMyMjdlOTBiNmM3ZjEzYzYxNDBkNTFkOM+tFXw=: 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.109 18:27:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.109 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.110 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.110 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.110 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.110 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.110 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.110 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.110 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:34.110 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.110 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.682 nvme0n1 00:28:34.682 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.682 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.682 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.682 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.682 18:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.682 18:27:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.682 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.682 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.682 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.682 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.682 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.682 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.682 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:34.682 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.682 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.682 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:34.682 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.682 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:34.682 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:34.683 18:27:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]] 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.683 18:27:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.683 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.255 nvme0n1 00:28:35.255 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.255 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.255 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.255 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.255 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.255 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.516 18:27:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: ]] 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:35.516 18:27:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.516 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.517 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.517 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.517 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.517 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.517 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:35.517 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.517 18:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.089 nvme0n1 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTBhMmNiNzkyZWY5ZmI4MTdmMjBhZGFmZTg3NDBiNGU4MGU1MmZiNjlkMGZhNDQ0Ut52Jw==: 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: ]] 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODBlMGYwYWNhMjY1NzU2N2ExYWU1YjkxZjRkNWNjZTKG7Q11: 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:36.089 18:27:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.089 18:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.033 nvme0n1 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDUwOTQwNGE4YjYwYWZmMGM1MTIwYzUxOGQxN2M3OTM5MWM4YWMxZGU3ODE1M2Y1ZTRiODkxNzA1NjU5ODI2MJIObgo=: 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.033 
18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.033 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.606 nvme0n1 00:28:37.606 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.606 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.606 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.606 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.606 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]] 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.607 18:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.607 request: 00:28:37.607 { 00:28:37.607 "name": "nvme0", 00:28:37.607 "trtype": "tcp", 00:28:37.607 "traddr": "10.0.0.1", 00:28:37.607 "adrfam": "ipv4", 00:28:37.607 "trsvcid": "4420", 00:28:37.607 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:37.607 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:37.607 "prchk_reftag": false, 00:28:37.607 "prchk_guard": false, 00:28:37.607 "hdgst": false, 00:28:37.607 "ddgst": false, 00:28:37.607 "allow_unrecognized_csi": false, 00:28:37.607 "method": "bdev_nvme_attach_controller", 00:28:37.607 "req_id": 1 00:28:37.607 } 00:28:37.607 Got JSON-RPC error 
response 00:28:37.607 response: 00:28:37.607 { 00:28:37.607 "code": -5, 00:28:37.607 "message": "Input/output error" 00:28:37.607 } 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.607 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.869 request: 
00:28:37.869 { 00:28:37.869 "name": "nvme0", 00:28:37.869 "trtype": "tcp", 00:28:37.869 "traddr": "10.0.0.1", 00:28:37.869 "adrfam": "ipv4", 00:28:37.869 "trsvcid": "4420", 00:28:37.869 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:37.869 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:37.869 "prchk_reftag": false, 00:28:37.869 "prchk_guard": false, 00:28:37.869 "hdgst": false, 00:28:37.869 "ddgst": false, 00:28:37.869 "dhchap_key": "key2", 00:28:37.869 "allow_unrecognized_csi": false, 00:28:37.869 "method": "bdev_nvme_attach_controller", 00:28:37.869 "req_id": 1 00:28:37.869 } 00:28:37.869 Got JSON-RPC error response 00:28:37.869 response: 00:28:37.869 { 00:28:37.869 "code": -5, 00:28:37.869 "message": "Input/output error" 00:28:37.869 } 00:28:37.869 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:37.869 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:37.869 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.869 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:37.869 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:37.869 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.869 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:37.869 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.869 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.869 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.870 18:27:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.870 request: 00:28:37.870 { 00:28:37.870 "name": "nvme0", 00:28:37.870 "trtype": "tcp", 00:28:37.870 "traddr": "10.0.0.1", 00:28:37.870 "adrfam": "ipv4", 00:28:37.870 "trsvcid": "4420", 00:28:37.870 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:37.870 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:37.870 "prchk_reftag": false, 00:28:37.870 "prchk_guard": false, 00:28:37.870 "hdgst": false, 00:28:37.870 "ddgst": false, 00:28:37.870 "dhchap_key": "key1", 00:28:37.870 "dhchap_ctrlr_key": "ckey2", 00:28:37.870 "allow_unrecognized_csi": false, 00:28:37.870 "method": "bdev_nvme_attach_controller", 00:28:37.870 "req_id": 1 00:28:37.870 } 00:28:37.870 Got JSON-RPC error response 00:28:37.870 response: 00:28:37.870 { 00:28:37.870 "code": -5, 00:28:37.870 "message": "Input/output error" 00:28:37.870 } 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.870 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.131 nvme0n1 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:38.132 18:27:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: ]] 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:38.132 
18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.132 request: 00:28:38.132 { 00:28:38.132 "name": "nvme0", 00:28:38.132 "dhchap_key": "key1", 00:28:38.132 "dhchap_ctrlr_key": "ckey2", 00:28:38.132 "method": "bdev_nvme_set_keys", 00:28:38.132 "req_id": 1 00:28:38.132 } 00:28:38.132 Got JSON-RPC error response 00:28:38.132 response: 
00:28:38.132 { 00:28:38.132 "code": -13, 00:28:38.132 "message": "Permission denied" 00:28:38.132 } 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.132 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.393 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.393 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:38.393 18:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:39.335 18:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.335 18:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:39.335 18:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.335 18:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.335 18:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.335 18:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:39.335 18:27:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:40.278 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.278 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:40.278 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.278 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.278 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.278 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:40.278 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:40.278 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.278 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.278 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.278 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:40.278 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:40.278 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:40.278 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E1NWNiOGYxOGM4YzIzMjIwNzE3ZDA0MGYxNTRlYzAwOTVkMDdhNDNmNmVmMTFkV1supA==: 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: ]] 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTY3MzMzYzU5NTgzNTE4ZTk2Y2E4YmE5ZmE5ZTc2MmNhY2MyOWI0NzRiZGYwMGU2GrmsxQ==: 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.541 nvme0n1 00:28:40.541 18:27:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDhiM2VhYWZhNTAxYTBmNjBlYjY2ZmM0OGMwNjRhYjnykBg+: 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: ]] 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWFmOTBmYjc5OTJlOGVjM2RlYjMxN2YxNDAwYjMyNTfLyTGc: 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:40.541 18:27:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.541 request: 00:28:40.541 { 00:28:40.541 "name": "nvme0", 00:28:40.541 "dhchap_key": "key2", 00:28:40.541 "dhchap_ctrlr_key": "ckey1", 00:28:40.541 "method": "bdev_nvme_set_keys", 00:28:40.541 "req_id": 1 00:28:40.541 } 00:28:40.541 Got JSON-RPC error response 00:28:40.541 response: 00:28:40.541 { 00:28:40.541 "code": -13, 00:28:40.541 "message": "Permission denied" 00:28:40.541 } 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:40.541 18:27:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.541 18:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.801 18:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:40.802 18:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.744 rmmod nvme_tcp 
00:28:41.744 rmmod nvme_fabrics 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2141896 ']' 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2141896 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2141896 ']' 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2141896 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2141896 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2141896' 00:28:41.744 killing process with pid 2141896 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2141896 00:28:41.744 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2141896 00:28:42.006 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:42.006 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:42.006 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:42.006 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:42.006 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:42.006 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:42.006 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:42.006 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:42.006 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:42.006 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.006 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.006 18:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.921 18:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:43.921 18:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:43.921 18:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:43.921 18:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:43.921 18:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:43.921 18:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:44.182 18:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:44.182 18:27:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:44.182 18:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:44.182 18:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:44.182 18:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:44.182 18:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:44.182 18:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:47.488 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:47.488 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:47.488 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:47.750 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:47.750 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:47.750 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:47.750 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:47.750 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:47.750 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:47.750 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:47.750 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:47.750 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:47.750 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:47.750 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:47.750 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:47.750 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:47.750 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:48.323 18:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.kki /tmp/spdk.key-null.G1M /tmp/spdk.key-sha256.uNn /tmp/spdk.key-sha384.Ixu 
/tmp/spdk.key-sha512.2Ac /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:48.323 18:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:51.628 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:51.628 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:51.628 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:51.628 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:51.628 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:51.628 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:51.628 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:51.628 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:51.628 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:51.628 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:51.628 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:51.628 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:51.628 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:51.628 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:51.628 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:51.628 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:51.628 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:51.890 00:28:51.890 real 1m0.846s 00:28:51.890 user 0m54.554s 00:28:51.890 sys 0m16.217s 00:28:51.890 18:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:51.890 18:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.890 ************************************ 00:28:51.890 END TEST nvmf_auth_host 00:28:51.890 ************************************ 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.151 ************************************ 00:28:52.151 START TEST nvmf_digest 00:28:52.151 ************************************ 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:52.151 * Looking for test storage... 00:28:52.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:52.151 18:27:53 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:52.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.151 --rc genhtml_branch_coverage=1 00:28:52.151 --rc genhtml_function_coverage=1 00:28:52.151 --rc genhtml_legend=1 00:28:52.151 --rc geninfo_all_blocks=1 00:28:52.151 --rc geninfo_unexecuted_blocks=1 00:28:52.151 00:28:52.151 ' 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:52.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.151 --rc genhtml_branch_coverage=1 00:28:52.151 --rc genhtml_function_coverage=1 00:28:52.151 --rc genhtml_legend=1 00:28:52.151 --rc geninfo_all_blocks=1 00:28:52.151 --rc geninfo_unexecuted_blocks=1 00:28:52.151 00:28:52.151 ' 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:52.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.151 --rc genhtml_branch_coverage=1 00:28:52.151 --rc genhtml_function_coverage=1 00:28:52.151 --rc genhtml_legend=1 00:28:52.151 --rc geninfo_all_blocks=1 00:28:52.151 --rc geninfo_unexecuted_blocks=1 00:28:52.151 00:28:52.151 ' 00:28:52.151 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:52.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.151 --rc genhtml_branch_coverage=1 00:28:52.151 --rc genhtml_function_coverage=1 00:28:52.151 --rc genhtml_legend=1 00:28:52.151 --rc geninfo_all_blocks=1 00:28:52.151 --rc geninfo_unexecuted_blocks=1 00:28:52.151 00:28:52.151 ' 00:28:52.151 18:27:53 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:52.413 
18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:52.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:52.413 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:52.414 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:52.414 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:52.414 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:52.414 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:52.414 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:52.414 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:52.414 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.414 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:52.414 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:52.414 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:52.414 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.414 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.414 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.414 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:52.414 18:27:53 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:52.414 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:52.414 18:27:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.561 18:28:00 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.561 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:00.562 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:00.562 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:00.562 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:00.562 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:00.562 18:28:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:00.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:29:00.562 00:29:00.562 --- 10.0.0.2 ping statistics --- 00:29:00.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.562 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:00.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:29:00.562 00:29:00.562 --- 10.0.0.1 ping statistics --- 00:29:00.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.562 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:00.562 ************************************ 00:29:00.562 START TEST nvmf_digest_clean 00:29:00.562 ************************************ 00:29:00.562 
18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:00.562 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2159443 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2159443 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2159443 ']' 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.563 18:28:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:00.563 [2024-11-19 18:28:01.280332] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:29:00.563 [2024-11-19 18:28:01.280395] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.563 [2024-11-19 18:28:01.354301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.563 [2024-11-19 18:28:01.400683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.563 [2024-11-19 18:28:01.400732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.563 [2024-11-19 18:28:01.400739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.563 [2024-11-19 18:28:01.400744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.563 [2024-11-19 18:28:01.400749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:00.563 [2024-11-19 18:28:01.401397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:00.563 null0 00:29:00.563 [2024-11-19 18:28:01.601559] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.563 [2024-11-19 18:28:01.625847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2159462 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2159462 /var/tmp/bperf.sock 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2159462 ']' 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:00.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.563 18:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:00.563 [2024-11-19 18:28:01.684662] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:29:00.563 [2024-11-19 18:28:01.684726] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159462 ] 00:29:00.563 [2024-11-19 18:28:01.775255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.563 [2024-11-19 18:28:01.827850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.136 18:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.136 18:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:01.136 18:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:01.136 18:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:01.136 18:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:01.398 18:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.398 18:28:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.659 nvme0n1 00:29:01.659 18:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:01.659 18:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:01.920 Running I/O for 2 seconds... 00:29:03.804 20123.00 IOPS, 78.61 MiB/s [2024-11-19T17:28:05.275Z] 20847.00 IOPS, 81.43 MiB/s 00:29:03.804 Latency(us) 00:29:03.804 [2024-11-19T17:28:05.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.804 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:03.804 nvme0n1 : 2.00 20872.15 81.53 0.00 0.00 6126.21 2798.93 17148.59 00:29:03.804 [2024-11-19T17:28:05.275Z] =================================================================================================================== 00:29:03.804 [2024-11-19T17:28:05.275Z] Total : 20872.15 81.53 0.00 0.00 6126.21 2798.93 17148.59 00:29:03.804 { 00:29:03.804 "results": [ 00:29:03.804 { 00:29:03.804 "job": "nvme0n1", 00:29:03.804 "core_mask": "0x2", 00:29:03.804 "workload": "randread", 00:29:03.804 "status": "finished", 00:29:03.804 "queue_depth": 128, 00:29:03.804 "io_size": 4096, 00:29:03.804 "runtime": 2.003723, 00:29:03.804 "iops": 20872.146499291568, 00:29:03.804 "mibps": 81.53182226285769, 00:29:03.804 "io_failed": 0, 00:29:03.804 "io_timeout": 0, 00:29:03.804 "avg_latency_us": 6126.213319305628, 00:29:03.804 "min_latency_us": 2798.9333333333334, 00:29:03.804 "max_latency_us": 17148.586666666666 00:29:03.804 } 00:29:03.804 ], 00:29:03.804 "core_count": 1 00:29:03.804 } 00:29:03.804 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:03.804 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:29:03.804 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:03.804 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:03.804 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:03.804 | select(.opcode=="crc32c") 00:29:03.804 | "\(.module_name) \(.executed)"' 00:29:04.066 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:04.066 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:04.066 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:04.066 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:04.066 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2159462 00:29:04.066 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2159462 ']' 00:29:04.066 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2159462 00:29:04.066 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:04.066 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:04.066 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2159462 00:29:04.066 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2159462' 00:29:04.067 killing process with pid 2159462 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2159462 00:29:04.067 Received shutdown signal, test time was about 2.000000 seconds 00:29:04.067 00:29:04.067 Latency(us) 00:29:04.067 [2024-11-19T17:28:05.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.067 [2024-11-19T17:28:05.538Z] =================================================================================================================== 00:29:04.067 [2024-11-19T17:28:05.538Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2159462 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2160147 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 2160147 /var/tmp/bperf.sock 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2160147 ']' 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:04.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.067 18:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:04.327 [2024-11-19 18:28:05.559911] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:29:04.327 [2024-11-19 18:28:05.559967] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160147 ] 00:29:04.327 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:04.327 Zero copy mechanism will not be used. 
00:29:04.327 [2024-11-19 18:28:05.643711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.327 [2024-11-19 18:28:05.673225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.898 18:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.898 18:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:04.898 18:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:04.898 18:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:04.899 18:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:05.159 18:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:05.159 18:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:05.420 nvme0n1 00:29:05.420 18:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:05.420 18:28:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:05.680 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:05.680 Zero copy mechanism will not be used. 00:29:05.680 Running I/O for 2 seconds... 
00:29:07.564 3840.00 IOPS, 480.00 MiB/s [2024-11-19T17:28:09.035Z] 3609.50 IOPS, 451.19 MiB/s 00:29:07.564 Latency(us) 00:29:07.564 [2024-11-19T17:28:09.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.564 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:07.564 nvme0n1 : 2.01 3605.91 450.74 0.00 0.00 4434.19 549.55 9393.49 00:29:07.564 [2024-11-19T17:28:09.035Z] =================================================================================================================== 00:29:07.564 [2024-11-19T17:28:09.035Z] Total : 3605.91 450.74 0.00 0.00 4434.19 549.55 9393.49 00:29:07.564 { 00:29:07.564 "results": [ 00:29:07.564 { 00:29:07.564 "job": "nvme0n1", 00:29:07.564 "core_mask": "0x2", 00:29:07.564 "workload": "randread", 00:29:07.564 "status": "finished", 00:29:07.564 "queue_depth": 16, 00:29:07.564 "io_size": 131072, 00:29:07.564 "runtime": 2.006429, 00:29:07.564 "iops": 3605.9088061426546, 00:29:07.564 "mibps": 450.7386007678318, 00:29:07.564 "io_failed": 0, 00:29:07.564 "io_timeout": 0, 00:29:07.564 "avg_latency_us": 4434.185376641327, 00:29:07.564 "min_latency_us": 549.5466666666666, 00:29:07.564 "max_latency_us": 9393.493333333334 00:29:07.564 } 00:29:07.564 ], 00:29:07.564 "core_count": 1 00:29:07.564 } 00:29:07.564 18:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:07.564 18:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:07.564 18:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:07.564 18:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:07.564 | select(.opcode=="crc32c") 00:29:07.564 | "\(.module_name) \(.executed)"' 00:29:07.564 18:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:07.826 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:07.826 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:07.826 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:07.826 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:07.826 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2160147 00:29:07.826 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2160147 ']' 00:29:07.826 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2160147 00:29:07.826 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:07.826 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.826 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2160147 00:29:07.826 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:07.826 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:07.826 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2160147' 00:29:07.826 killing process with pid 2160147 00:29:07.826 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2160147 00:29:07.826 Received shutdown signal, test time was about 2.000000 seconds 
00:29:07.826 00:29:07.826 Latency(us) 00:29:07.826 [2024-11-19T17:28:09.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.826 [2024-11-19T17:28:09.297Z] =================================================================================================================== 00:29:07.826 [2024-11-19T17:28:09.297Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:07.826 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2160147 00:29:08.086 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:08.086 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:08.086 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:08.086 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:08.086 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:08.086 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:08.086 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:08.086 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2160897 00:29:08.086 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2160897 /var/tmp/bperf.sock 00:29:08.086 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2160897 ']' 00:29:08.086 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:08.086 18:28:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:08.086 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:08.086 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:08.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:08.086 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:08.086 18:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:08.086 [2024-11-19 18:28:09.350191] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:29:08.086 [2024-11-19 18:28:09.350251] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160897 ] 00:29:08.086 [2024-11-19 18:28:09.431684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.086 [2024-11-19 18:28:09.461210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.025 18:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:09.025 18:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:09.026 18:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:09.026 18:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:09.026 18:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:09.026 18:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.026 18:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.285 nvme0n1 00:29:09.546 18:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:09.546 18:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:09.546 Running I/O for 2 seconds... 
00:29:11.430 30224.00 IOPS, 118.06 MiB/s [2024-11-19T17:28:12.901Z] 30332.00 IOPS, 118.48 MiB/s 00:29:11.430 Latency(us) 00:29:11.430 [2024-11-19T17:28:12.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.430 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:11.430 nvme0n1 : 2.00 30332.12 118.48 0.00 0.00 4213.44 2075.31 15073.28 00:29:11.430 [2024-11-19T17:28:12.901Z] =================================================================================================================== 00:29:11.430 [2024-11-19T17:28:12.901Z] Total : 30332.12 118.48 0.00 0.00 4213.44 2075.31 15073.28 00:29:11.430 { 00:29:11.430 "results": [ 00:29:11.430 { 00:29:11.430 "job": "nvme0n1", 00:29:11.430 "core_mask": "0x2", 00:29:11.430 "workload": "randwrite", 00:29:11.430 "status": "finished", 00:29:11.430 "queue_depth": 128, 00:29:11.430 "io_size": 4096, 00:29:11.430 "runtime": 2.004212, 00:29:11.430 "iops": 30332.120554113037, 00:29:11.430 "mibps": 118.48484591450405, 00:29:11.430 "io_failed": 0, 00:29:11.430 "io_timeout": 0, 00:29:11.430 "avg_latency_us": 4213.44377593543, 00:29:11.430 "min_latency_us": 2075.306666666667, 00:29:11.430 "max_latency_us": 15073.28 00:29:11.430 } 00:29:11.430 ], 00:29:11.430 "core_count": 1 00:29:11.430 } 00:29:11.430 18:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:11.430 18:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:11.430 18:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:11.430 18:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:11.430 | select(.opcode=="crc32c") 00:29:11.431 | "\(.module_name) \(.executed)"' 00:29:11.431 18:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:11.691 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:11.691 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:11.691 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:11.691 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:11.691 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2160897 00:29:11.691 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2160897 ']' 00:29:11.691 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2160897 00:29:11.691 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:11.691 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:11.691 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2160897 00:29:11.691 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:11.691 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:11.691 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2160897' 00:29:11.691 killing process with pid 2160897 00:29:11.691 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2160897 00:29:11.691 Received shutdown signal, test time was about 2.000000 seconds 
00:29:11.691 00:29:11.691 Latency(us) 00:29:11.691 [2024-11-19T17:28:13.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.691 [2024-11-19T17:28:13.162Z] =================================================================================================================== 00:29:11.691 [2024-11-19T17:28:13.162Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:11.691 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2160897 00:29:11.953 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:11.953 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:11.953 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:11.953 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:11.953 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:11.953 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:11.953 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:11.953 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2161759 00:29:11.953 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2161759 /var/tmp/bperf.sock 00:29:11.953 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2161759 ']' 00:29:11.953 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:11.953 18:28:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:11.953 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.953 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:11.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:11.953 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.953 18:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:11.953 [2024-11-19 18:28:13.276729] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:29:11.953 [2024-11-19 18:28:13.276786] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2161759 ] 00:29:11.953 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:11.953 Zero copy mechanism will not be used. 
00:29:11.953 [2024-11-19 18:28:13.361364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.953 [2024-11-19 18:28:13.390779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.895 18:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:12.895 18:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:12.895 18:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:12.895 18:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:12.895 18:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:12.895 18:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:12.895 18:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:13.465 nvme0n1 00:29:13.465 18:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:13.465 18:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:13.465 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:13.465 Zero copy mechanism will not be used. 00:29:13.465 Running I/O for 2 seconds... 
00:29:15.347 6503.00 IOPS, 812.88 MiB/s [2024-11-19T17:28:16.818Z] 6712.50 IOPS, 839.06 MiB/s 00:29:15.347 Latency(us) 00:29:15.347 [2024-11-19T17:28:16.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.347 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:15.347 nvme0n1 : 2.01 6702.68 837.83 0.00 0.00 2381.97 1133.23 12506.45 00:29:15.347 [2024-11-19T17:28:16.818Z] =================================================================================================================== 00:29:15.347 [2024-11-19T17:28:16.818Z] Total : 6702.68 837.83 0.00 0.00 2381.97 1133.23 12506.45 00:29:15.347 { 00:29:15.347 "results": [ 00:29:15.347 { 00:29:15.347 "job": "nvme0n1", 00:29:15.347 "core_mask": "0x2", 00:29:15.347 "workload": "randwrite", 00:29:15.347 "status": "finished", 00:29:15.347 "queue_depth": 16, 00:29:15.347 "io_size": 131072, 00:29:15.347 "runtime": 2.005766, 00:29:15.347 "iops": 6702.676184559914, 00:29:15.347 "mibps": 837.8345230699892, 00:29:15.347 "io_failed": 0, 00:29:15.347 "io_timeout": 0, 00:29:15.347 "avg_latency_us": 2381.9668908063077, 00:29:15.347 "min_latency_us": 1133.2266666666667, 00:29:15.347 "max_latency_us": 12506.453333333333 00:29:15.347 } 00:29:15.347 ], 00:29:15.347 "core_count": 1 00:29:15.347 } 00:29:15.347 18:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:15.347 18:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:15.347 18:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:15.347 18:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:15.347 | select(.opcode=="crc32c") 00:29:15.347 | "\(.module_name) \(.executed)"' 00:29:15.347 18:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:15.608 18:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:15.608 18:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:15.608 18:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:15.608 18:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:15.608 18:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2161759 00:29:15.608 18:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2161759 ']' 00:29:15.608 18:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2161759 00:29:15.608 18:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:15.608 18:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.608 18:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2161759 00:29:15.608 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:15.608 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:15.608 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2161759' 00:29:15.608 killing process with pid 2161759 00:29:15.608 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2161759 00:29:15.608 Received shutdown signal, test time was about 2.000000 seconds 
00:29:15.608 00:29:15.608 Latency(us) 00:29:15.608 [2024-11-19T17:28:17.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.608 [2024-11-19T17:28:17.079Z] =================================================================================================================== 00:29:15.608 [2024-11-19T17:28:17.079Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:15.608 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2161759 00:29:15.869 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2159443 00:29:15.869 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2159443 ']' 00:29:15.869 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2159443 00:29:15.869 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:15.869 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.869 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2159443 00:29:15.869 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:15.869 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:15.869 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2159443' 00:29:15.869 killing process with pid 2159443 00:29:15.869 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2159443 00:29:15.869 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2159443 00:29:15.869 00:29:15.869 
real 0m16.100s 00:29:15.869 user 0m32.347s 00:29:15.869 sys 0m3.732s 00:29:15.869 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.869 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:15.869 ************************************ 00:29:15.869 END TEST nvmf_digest_clean 00:29:15.869 ************************************ 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:16.130 ************************************ 00:29:16.130 START TEST nvmf_digest_error 00:29:16.130 ************************************ 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2162548 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2162548 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2162548 ']' 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:16.130 18:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:16.130 [2024-11-19 18:28:17.465809] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:29:16.130 [2024-11-19 18:28:17.465868] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.130 [2024-11-19 18:28:17.559680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.130 [2024-11-19 18:28:17.592723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.130 [2024-11-19 18:28:17.592753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:16.130 [2024-11-19 18:28:17.592759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.130 [2024-11-19 18:28:17.592764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.130 [2024-11-19 18:28:17.592768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:16.130 [2024-11-19 18:28:17.593259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:17.071 [2024-11-19 18:28:18.311238] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.071 18:28:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:17.071 null0 00:29:17.071 [2024-11-19 18:28:18.388660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.071 [2024-11-19 18:28:18.412839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2162768 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2162768 /var/tmp/bperf.sock 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2162768 ']' 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:17.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:17.071 18:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:17.071 [2024-11-19 18:28:18.475593] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:29:17.071 [2024-11-19 18:28:18.475683] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162768 ] 00:29:17.331 [2024-11-19 18:28:18.561767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.331 [2024-11-19 18:28:18.591514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.902 18:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.902 18:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:17.902 18:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:17.902 18:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:18.161 18:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:18.161 18:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.161 18:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:18.161 18:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.161 18:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:18.161 18:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:18.423 nvme0n1 00:29:18.423 18:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:18.423 18:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.423 18:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:18.423 18:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.423 18:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:18.423 18:28:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:18.423 Running I/O for 2 seconds... 00:29:18.423 [2024-11-19 18:28:19.817733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:18.423 [2024-11-19 18:28:19.817764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.423 [2024-11-19 18:28:19.817777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.423 [2024-11-19 18:28:19.829018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:18.423 [2024-11-19 18:28:19.829039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.423 [2024-11-19 18:28:19.829046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.423 [2024-11-19 18:28:19.838210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:18.423 [2024-11-19 18:28:19.838229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.423 [2024-11-19 18:28:19.838236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.423 [2024-11-19 18:28:19.846279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:18.423 [2024-11-19 18:28:19.846298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17464 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.423 [2024-11-19 18:28:19.846305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.423 [2024-11-19 18:28:19.856031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:18.423 [2024-11-19 18:28:19.856049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.423 [2024-11-19 18:28:19.856056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.423 [2024-11-19 18:28:19.865510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:18.423 [2024-11-19 18:28:19.865528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.423 [2024-11-19 18:28:19.865535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.423 [2024-11-19 18:28:19.874080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:18.423 [2024-11-19 18:28:19.874099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.423 [2024-11-19 18:28:19.874105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.423 [2024-11-19 18:28:19.883094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:18.423 [2024-11-19 18:28:19.883112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.423 [2024-11-19 18:28:19.883119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.684 [2024-11-19 18:28:19.892062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.684 [2024-11-19 18:28:19.892080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.684 [2024-11-19 18:28:19.892087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.684 [2024-11-19 18:28:19.901078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.684 [2024-11-19 18:28:19.901100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.684 [2024-11-19 18:28:19.901107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.684 [2024-11-19 18:28:19.910320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.684 [2024-11-19 18:28:19.910338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.684 [2024-11-19 18:28:19.910345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.684 [2024-11-19 18:28:19.918956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.684 [2024-11-19 18:28:19.918973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.684 [2024-11-19 18:28:19.918980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.684 [2024-11-19 18:28:19.928267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.684 [2024-11-19 18:28:19.928285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.684 [2024-11-19 18:28:19.928291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.684 [2024-11-19 18:28:19.937611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.684 [2024-11-19 18:28:19.937629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.684 [2024-11-19 18:28:19.937635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.684 [2024-11-19 18:28:19.945177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.684 [2024-11-19 18:28:19.945195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.684 [2024-11-19 18:28:19.945202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.684 [2024-11-19 18:28:19.954656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.684 [2024-11-19 18:28:19.954674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.684 [2024-11-19 18:28:19.954681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.684 [2024-11-19 18:28:19.964406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.684 [2024-11-19 18:28:19.964424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.684 [2024-11-19 18:28:19.964430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.684 [2024-11-19 18:28:19.975689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.684 [2024-11-19 18:28:19.975708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.684 [2024-11-19 18:28:19.975715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.684 [2024-11-19 18:28:19.983597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.684 [2024-11-19 18:28:19.983615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:19.983622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:19.993171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:19.993189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:19.993196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:20.002632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:20.002651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:20.002658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:20.011373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:20.011391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:20.011398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:20.019578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:20.019596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:20.019602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:20.029477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:20.029494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:20.029501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:20.038674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:20.038691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:20.038698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:20.047672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:20.047689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:20.047696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:20.056559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:20.056580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:20.056587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:20.065532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:20.065550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:20.065556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:20.074317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:20.074335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:20.074341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:20.083058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:20.083075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:20.083082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:20.092450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:20.092468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:20.092474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:20.100604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:20.100621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:20.100628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:20.108891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:20.108908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:20.108915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:20.119507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:20.119524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:20.119531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:20.129994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:20.130012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:20.130018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:20.137597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:20.137615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:20.137622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.685 [2024-11-19 18:28:20.146766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.685 [2024-11-19 18:28:20.146783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.685 [2024-11-19 18:28:20.146790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.155377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.155395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.155402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.164484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.164501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.164508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.173100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.173117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.173123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.182102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.182121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.182128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.191006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.191024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.191030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.199577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.199595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.199601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.208756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.208774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.208784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.218186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.218203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.218210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.227255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.227272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.227279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.235870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.235888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.235895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.244739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.244757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.244764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.253792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.253809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.253816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.262721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.262737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.262744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.272064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.272082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.272089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.281645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.281662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.281669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.290755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.290776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.290783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.299043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.299060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.299067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.308400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.308417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.947 [2024-11-19 18:28:20.308424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.947 [2024-11-19 18:28:20.317196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.947 [2024-11-19 18:28:20.317213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.948 [2024-11-19 18:28:20.317220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.948 [2024-11-19 18:28:20.326225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.948 [2024-11-19 18:28:20.326243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.948 [2024-11-19 18:28:20.326249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.948 [2024-11-19 18:28:20.335242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.948 [2024-11-19 18:28:20.335259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.948 [2024-11-19 18:28:20.335266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.948 [2024-11-19 18:28:20.343370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.948 [2024-11-19 18:28:20.343387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.948 [2024-11-19 18:28:20.343393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.948 [2024-11-19 18:28:20.351685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.948 [2024-11-19 18:28:20.351703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.948 [2024-11-19 18:28:20.351710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.948 [2024-11-19 18:28:20.360820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.948 [2024-11-19 18:28:20.360837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.948 [2024-11-19 18:28:20.360844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.948 [2024-11-19 18:28:20.369999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.948 [2024-11-19 18:28:20.370017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.948 [2024-11-19 18:28:20.370023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.948 [2024-11-19 18:28:20.379139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.948 [2024-11-19 18:28:20.379157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.948 [2024-11-19 18:28:20.379168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.948 [2024-11-19 18:28:20.391384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.948 [2024-11-19 18:28:20.391402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.948 [2024-11-19 18:28:20.391408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.948 [2024-11-19 18:28:20.399403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.948 [2024-11-19 18:28:20.399421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.948 [2024-11-19 18:28:20.399428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:18.948 [2024-11-19 18:28:20.410224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:18.948 [2024-11-19 18:28:20.410242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.948 [2024-11-19 18:28:20.410248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.209 [2024-11-19 18:28:20.418808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.209 [2024-11-19 18:28:20.418826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.209 [2024-11-19 18:28:20.418833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.209 [2024-11-19 18:28:20.427280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.209 [2024-11-19 18:28:20.427297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.209 [2024-11-19 18:28:20.427304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.209 [2024-11-19 18:28:20.436488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.209 [2024-11-19 18:28:20.436505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.209 [2024-11-19 18:28:20.436512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.209 [2024-11-19 18:28:20.445885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.209 [2024-11-19 18:28:20.445903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.209 [2024-11-19 18:28:20.445913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.209 [2024-11-19 18:28:20.454864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.209 [2024-11-19 18:28:20.454881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.209 [2024-11-19 18:28:20.454889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.209 [2024-11-19 18:28:20.462813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.209 [2024-11-19 18:28:20.462831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.209 [2024-11-19 18:28:20.462837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.209 [2024-11-19 18:28:20.472298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.209 [2024-11-19 18:28:20.472316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.209 [2024-11-19 18:28:20.472324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.209 [2024-11-19 18:28:20.481850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.209 [2024-11-19 18:28:20.481868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.209 [2024-11-19 18:28:20.481875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.209 [2024-11-19 18:28:20.489879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.209 [2024-11-19 18:28:20.489896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.209 [2024-11-19 18:28:20.489903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.209 [2024-11-19 18:28:20.499275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.209 [2024-11-19 18:28:20.499292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.209 [2024-11-19 18:28:20.499298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.209 [2024-11-19 18:28:20.507294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.210 [2024-11-19 18:28:20.507311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.210 [2024-11-19 18:28:20.507318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.210 [2024-11-19 18:28:20.516580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.210 [2024-11-19 18:28:20.516598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.210 [2024-11-19 18:28:20.516605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.210 [2024-11-19 18:28:20.528708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.210 [2024-11-19 18:28:20.528725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.210 [2024-11-19 18:28:20.528732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.210 [2024-11-19 18:28:20.540564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.210 [2024-11-19 18:28:20.540581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.210 [2024-11-19 18:28:20.540587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.210 [2024-11-19 18:28:20.549166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.210 [2024-11-19 18:28:20.549183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.210 [2024-11-19 18:28:20.549190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.210 [2024-11-19 18:28:20.557770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.210 [2024-11-19 18:28:20.557787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.210 [2024-11-19 18:28:20.557794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.210 [2024-11-19 18:28:20.566528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.210 [2024-11-19 18:28:20.566544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.210 [2024-11-19 18:28:20.566551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.210 [2024-11-19 18:28:20.575858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.210 [2024-11-19 18:28:20.575874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.210 [2024-11-19 18:28:20.575881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.210 [2024-11-19 18:28:20.584041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.210 [2024-11-19 18:28:20.584058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.210 [2024-11-19 18:28:20.584064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.210 [2024-11-19 18:28:20.592830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.210 [2024-11-19 18:28:20.592847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.210 [2024-11-19 18:28:20.592853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.210 [2024-11-19 18:28:20.601636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.210 [2024-11-19 18:28:20.601654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.210 [2024-11-19 18:28:20.601663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.210 [2024-11-19 18:28:20.610427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.210 [2024-11-19 18:28:20.610444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.210 [2024-11-19 18:28:20.610450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.210 [2024-11-19 18:28:20.619163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.210 [2024-11-19 18:28:20.619181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7017 len:1 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 18:28:20.619187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 18:28:20.628672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.210 [2024-11-19 18:28:20.628689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 18:28:20.628695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 18:28:20.636292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.210 [2024-11-19 18:28:20.636309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 18:28:20.636315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 18:28:20.645851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.210 [2024-11-19 18:28:20.645868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 18:28:20.645875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 18:28:20.654418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.210 [2024-11-19 18:28:20.654435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:77 nsid:1 lba:149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 18:28:20.654441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 18:28:20.663365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.210 [2024-11-19 18:28:20.663381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 18:28:20.663388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.210 [2024-11-19 18:28:20.671859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.210 [2024-11-19 18:28:20.671876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.210 [2024-11-19 18:28:20.671883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.472 [2024-11-19 18:28:20.680799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.472 [2024-11-19 18:28:20.680820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.472 [2024-11-19 18:28:20.680827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.472 [2024-11-19 18:28:20.689785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.472 [2024-11-19 18:28:20.689801] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.472 [2024-11-19 18:28:20.689807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.472 [2024-11-19 18:28:20.699029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.472 [2024-11-19 18:28:20.699047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.472 [2024-11-19 18:28:20.699053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.472 [2024-11-19 18:28:20.707017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.472 [2024-11-19 18:28:20.707034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.472 [2024-11-19 18:28:20.707041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.472 [2024-11-19 18:28:20.717867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.472 [2024-11-19 18:28:20.717884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.472 [2024-11-19 18:28:20.717891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.472 [2024-11-19 18:28:20.728343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x25165c0) 00:29:19.472 [2024-11-19 18:28:20.728360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.472 [2024-11-19 18:28:20.728366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.472 [2024-11-19 18:28:20.738239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.472 [2024-11-19 18:28:20.738256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.472 [2024-11-19 18:28:20.738263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.472 [2024-11-19 18:28:20.746036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.472 [2024-11-19 18:28:20.746053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.472 [2024-11-19 18:28:20.746060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.472 [2024-11-19 18:28:20.755771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.472 [2024-11-19 18:28:20.755788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.472 [2024-11-19 18:28:20.755795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.472 [2024-11-19 18:28:20.765134] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.472 [2024-11-19 18:28:20.765151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.472 [2024-11-19 18:28:20.765161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.472 [2024-11-19 18:28:20.774203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.472 [2024-11-19 18:28:20.774220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.472 [2024-11-19 18:28:20.774227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.472 [2024-11-19 18:28:20.782844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.472 [2024-11-19 18:28:20.782860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.472 [2024-11-19 18:28:20.782867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.472 [2024-11-19 18:28:20.791919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.473 [2024-11-19 18:28:20.791936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.473 [2024-11-19 18:28:20.791942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:19.473 [2024-11-19 18:28:20.800899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.473 [2024-11-19 18:28:20.800916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.473 [2024-11-19 18:28:20.800923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.473 27940.00 IOPS, 109.14 MiB/s [2024-11-19T17:28:20.944Z] [2024-11-19 18:28:20.809023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.473 [2024-11-19 18:28:20.809039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.473 [2024-11-19 18:28:20.809046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.473 [2024-11-19 18:28:20.818607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.473 [2024-11-19 18:28:20.818624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.473 [2024-11-19 18:28:20.818631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.473 [2024-11-19 18:28:20.828122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.473 [2024-11-19 18:28:20.828139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.473 [2024-11-19 18:28:20.828145] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.473 [2024-11-19 18:28:20.836482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.473 [2024-11-19 18:28:20.836498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.473 [2024-11-19 18:28:20.836508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.473 [2024-11-19 18:28:20.845149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.473 [2024-11-19 18:28:20.845170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.473 [2024-11-19 18:28:20.845177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.473 [2024-11-19 18:28:20.854164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.473 [2024-11-19 18:28:20.854181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.473 [2024-11-19 18:28:20.854188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.473 [2024-11-19 18:28:20.863810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.473 [2024-11-19 18:28:20.863827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23273 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:19.473 [2024-11-19 18:28:20.863833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.473 [2024-11-19 18:28:20.871970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.473 [2024-11-19 18:28:20.871987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.473 [2024-11-19 18:28:20.871993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.473 [2024-11-19 18:28:20.881213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.473 [2024-11-19 18:28:20.881230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.473 [2024-11-19 18:28:20.881236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.473 [2024-11-19 18:28:20.888750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.473 [2024-11-19 18:28:20.888767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.473 [2024-11-19 18:28:20.888773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.473 [2024-11-19 18:28:20.899362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.473 [2024-11-19 18:28:20.899379] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.473 [2024-11-19 18:28:20.899385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.473 [2024-11-19 18:28:20.907083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.473 [2024-11-19 18:28:20.907100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.473 [2024-11-19 18:28:20.907107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.473 [2024-11-19 18:28:20.917771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.473 [2024-11-19 18:28:20.917788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.473 [2024-11-19 18:28:20.917795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.473 [2024-11-19 18:28:20.926897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.473 [2024-11-19 18:28:20.926914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.473 [2024-11-19 18:28:20.926920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.473 [2024-11-19 18:28:20.936532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.473 [2024-11-19 
18:28:20.936550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.473 [2024-11-19 18:28:20.936556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.734 [2024-11-19 18:28:20.945948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.735 [2024-11-19 18:28:20.945965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.735 [2024-11-19 18:28:20.945972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.735 [2024-11-19 18:28:20.954782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.735 [2024-11-19 18:28:20.954799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.735 [2024-11-19 18:28:20.954806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.735 [2024-11-19 18:28:20.964326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.735 [2024-11-19 18:28:20.964343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.735 [2024-11-19 18:28:20.964350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.735 [2024-11-19 18:28:20.973372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x25165c0) 00:29:19.735 [2024-11-19 18:28:20.973388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.735 [2024-11-19 18:28:20.973395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.735 [2024-11-19 18:28:20.982393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.735 [2024-11-19 18:28:20.982410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.735 [2024-11-19 18:28:20.982416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.735 [2024-11-19 18:28:20.990208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.735 [2024-11-19 18:28:20.990225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.735 [2024-11-19 18:28:20.990234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.735 [2024-11-19 18:28:20.999551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.735 [2024-11-19 18:28:20.999568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.735 [2024-11-19 18:28:20.999574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.735 [2024-11-19 18:28:21.007731] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.735 [2024-11-19 18:28:21.007748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.735 [2024-11-19 18:28:21.007754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.735 [2024-11-19 18:28:21.016826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.735 [2024-11-19 18:28:21.016843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.735 [2024-11-19 18:28:21.016850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.735 [2024-11-19 18:28:21.025192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.735 [2024-11-19 18:28:21.025209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.735 [2024-11-19 18:28:21.025215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.735 [2024-11-19 18:28:21.035085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.735 [2024-11-19 18:28:21.035102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.735 [2024-11-19 18:28:21.035108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:19.735 [2024-11-19 18:28:21.043661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.735 [2024-11-19 18:28:21.043678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.735 [2024-11-19 18:28:21.043685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.735 [2024-11-19 18:28:21.051860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.735 [2024-11-19 18:28:21.051877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.735 [2024-11-19 18:28:21.051883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.735 [2024-11-19 18:28:21.060394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.735 [2024-11-19 18:28:21.060411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.735 [2024-11-19 18:28:21.060417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:19.735 [2024-11-19 18:28:21.069842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0) 00:29:19.735 [2024-11-19 18:28:21.069861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.735 [2024-11-19 18:28:21.069868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.735 [2024-11-19 18:28:21.079988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.735 [2024-11-19 18:28:21.080005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.735 [2024-11-19 18:28:21.080011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.735 [2024-11-19 18:28:21.087303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.735 [2024-11-19 18:28:21.087320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.735 [2024-11-19 18:28:21.087326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.735 [2024-11-19 18:28:21.097928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.735 [2024-11-19 18:28:21.097945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.735 [2024-11-19 18:28:21.097951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.735 [2024-11-19 18:28:21.105882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.735 [2024-11-19 18:28:21.105900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.735 [2024-11-19 18:28:21.105906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.735 [2024-11-19 18:28:21.115559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.735 [2024-11-19 18:28:21.115576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.735 [2024-11-19 18:28:21.115583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.735 [2024-11-19 18:28:21.124147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.735 [2024-11-19 18:28:21.124169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.735 [2024-11-19 18:28:21.124176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.735 [2024-11-19 18:28:21.133783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.735 [2024-11-19 18:28:21.133800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.735 [2024-11-19 18:28:21.133807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.735 [2024-11-19 18:28:21.143985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.735 [2024-11-19 18:28:21.144002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.735 [2024-11-19 18:28:21.144008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.735 [2024-11-19 18:28:21.151756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.735 [2024-11-19 18:28:21.151773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.735 [2024-11-19 18:28:21.151780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.735 [2024-11-19 18:28:21.160107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.735 [2024-11-19 18:28:21.160125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.735 [2024-11-19 18:28:21.160131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.735 [2024-11-19 18:28:21.169700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.735 [2024-11-19 18:28:21.169717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.735 [2024-11-19 18:28:21.169723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.735 [2024-11-19 18:28:21.178877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.735 [2024-11-19 18:28:21.178894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.736 [2024-11-19 18:28:21.178901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.736 [2024-11-19 18:28:21.187009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.736 [2024-11-19 18:28:21.187026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.736 [2024-11-19 18:28:21.187032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.736 [2024-11-19 18:28:21.196155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.736 [2024-11-19 18:28:21.196176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.736 [2024-11-19 18:28:21.196183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.997 [2024-11-19 18:28:21.206433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.997 [2024-11-19 18:28:21.206451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.997 [2024-11-19 18:28:21.206457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.997 [2024-11-19 18:28:21.215301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.997 [2024-11-19 18:28:21.215318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.997 [2024-11-19 18:28:21.215324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.997 [2024-11-19 18:28:21.223798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.997 [2024-11-19 18:28:21.223815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.997 [2024-11-19 18:28:21.223824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.997 [2024-11-19 18:28:21.233785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.997 [2024-11-19 18:28:21.233802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.997 [2024-11-19 18:28:21.233809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.997 [2024-11-19 18:28:21.244353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.997 [2024-11-19 18:28:21.244370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.997 [2024-11-19 18:28:21.244376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.997 [2024-11-19 18:28:21.253361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.997 [2024-11-19 18:28:21.253378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.997 [2024-11-19 18:28:21.253385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.997 [2024-11-19 18:28:21.263125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.997 [2024-11-19 18:28:21.263142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.997 [2024-11-19 18:28:21.263149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.997 [2024-11-19 18:28:21.273267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.997 [2024-11-19 18:28:21.273285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.997 [2024-11-19 18:28:21.273292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.997 [2024-11-19 18:28:21.280686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.997 [2024-11-19 18:28:21.280703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.997 [2024-11-19 18:28:21.280710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.997 [2024-11-19 18:28:21.291125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.997 [2024-11-19 18:28:21.291142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.997 [2024-11-19 18:28:21.291149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.997 [2024-11-19 18:28:21.300236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.997 [2024-11-19 18:28:21.300253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.997 [2024-11-19 18:28:21.300259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.997 [2024-11-19 18:28:21.308629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.997 [2024-11-19 18:28:21.308646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.997 [2024-11-19 18:28:21.308653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.997 [2024-11-19 18:28:21.317657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.997 [2024-11-19 18:28:21.317674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.997 [2024-11-19 18:28:21.317680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.997 [2024-11-19 18:28:21.325924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.997 [2024-11-19 18:28:21.325941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.997 [2024-11-19 18:28:21.325948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.997 [2024-11-19 18:28:21.335607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.997 [2024-11-19 18:28:21.335624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.997 [2024-11-19 18:28:21.335630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.998 [2024-11-19 18:28:21.344299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.998 [2024-11-19 18:28:21.344315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.998 [2024-11-19 18:28:21.344322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.998 [2024-11-19 18:28:21.353393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.998 [2024-11-19 18:28:21.353409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.998 [2024-11-19 18:28:21.353416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.998 [2024-11-19 18:28:21.361521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.998 [2024-11-19 18:28:21.361538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.998 [2024-11-19 18:28:21.361544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.998 [2024-11-19 18:28:21.369827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.998 [2024-11-19 18:28:21.369844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:32 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.998 [2024-11-19 18:28:21.369851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.998 [2024-11-19 18:28:21.380748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.998 [2024-11-19 18:28:21.380766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.998 [2024-11-19 18:28:21.380776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.998 [2024-11-19 18:28:21.391953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.998 [2024-11-19 18:28:21.391970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.998 [2024-11-19 18:28:21.391976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.998 [2024-11-19 18:28:21.399834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.998 [2024-11-19 18:28:21.399851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.998 [2024-11-19 18:28:21.399858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.998 [2024-11-19 18:28:21.410447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.998 [2024-11-19 18:28:21.410465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.998 [2024-11-19 18:28:21.410471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.998 [2024-11-19 18:28:21.419879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.998 [2024-11-19 18:28:21.419897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.998 [2024-11-19 18:28:21.419903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.998 [2024-11-19 18:28:21.428995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.998 [2024-11-19 18:28:21.429012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.998 [2024-11-19 18:28:21.429019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.998 [2024-11-19 18:28:21.436684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.998 [2024-11-19 18:28:21.436701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.998 [2024-11-19 18:28:21.436707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.998 [2024-11-19 18:28:21.446278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.998 [2024-11-19 18:28:21.446295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.998 [2024-11-19 18:28:21.446302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:19.998 [2024-11-19 18:28:21.455543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:19.998 [2024-11-19 18:28:21.455560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.998 [2024-11-19 18:28:21.455566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.464464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.464485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.464491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.473214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.473232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.473238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.481841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.481858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.481864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.491567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.491585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.491592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.499610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.499627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.499634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.509137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.509155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.509166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.518940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.518959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.518965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.528371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.528388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.528395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.536991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.537008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.537015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.546306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.546324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.546331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.555046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.555063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.555069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.564628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.564646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.564652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.574449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.574467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.574474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.582023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.582041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.582048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.592892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.592909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.592916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.602305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.602322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.602328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.609883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.609900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.609906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.620281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.620298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.620308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.629285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.629302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.260 [2024-11-19 18:28:21.629309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.260 [2024-11-19 18:28:21.637954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.260 [2024-11-19 18:28:21.637971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.261 [2024-11-19 18:28:21.637977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.261 [2024-11-19 18:28:21.646115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.261 [2024-11-19 18:28:21.646133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.261 [2024-11-19 18:28:21.646140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.261 [2024-11-19 18:28:21.655682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.261 [2024-11-19 18:28:21.655699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.261 [2024-11-19 18:28:21.655706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.261 [2024-11-19 18:28:21.664889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.261 [2024-11-19 18:28:21.664906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.261 [2024-11-19 18:28:21.664912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.261 [2024-11-19 18:28:21.673316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.261 [2024-11-19 18:28:21.673334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.261 [2024-11-19 18:28:21.673341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.261 [2024-11-19 18:28:21.681957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.261 [2024-11-19 18:28:21.681975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.261 [2024-11-19 18:28:21.681982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.261 [2024-11-19 18:28:21.691202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.261 [2024-11-19 18:28:21.691219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.261 [2024-11-19 18:28:21.691225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.261 [2024-11-19 18:28:21.699983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.261 [2024-11-19 18:28:21.700001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.261 [2024-11-19 18:28:21.700008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.261 [2024-11-19 18:28:21.710182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.261 [2024-11-19 18:28:21.710201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.261 [2024-11-19 18:28:21.710208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.261 [2024-11-19 18:28:21.722329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.261 [2024-11-19 18:28:21.722347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.261 [2024-11-19 18:28:21.722354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.522 [2024-11-19 18:28:21.730674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.522 [2024-11-19 18:28:21.730692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.522 [2024-11-19 18:28:21.730699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.522 [2024-11-19 18:28:21.738798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.522 [2024-11-19 18:28:21.738815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.522 [2024-11-19 18:28:21.738822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.522 [2024-11-19 18:28:21.749425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.522 [2024-11-19 18:28:21.749443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.522 [2024-11-19 18:28:21.749449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.522 [2024-11-19 18:28:21.758716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.522 [2024-11-19 18:28:21.758733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.522 [2024-11-19 18:28:21.758740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.522 [2024-11-19 18:28:21.766542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.522 [2024-11-19 18:28:21.766559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.522 [2024-11-19 18:28:21.766566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.523 [2024-11-19 18:28:21.776314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.523 [2024-11-19 18:28:21.776331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.523 [2024-11-19 18:28:21.776341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.523 [2024-11-19 18:28:21.785689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.523 [2024-11-19 18:28:21.785707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.523 [2024-11-19 18:28:21.785713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.523 [2024-11-19 18:28:21.795342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.523 [2024-11-19 18:28:21.795360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.523 [2024-11-19 18:28:21.795366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.523 [2024-11-19 18:28:21.804217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.523 [2024-11-19 18:28:21.804234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.523 [2024-11-19 18:28:21.804241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:20.523 27989.50 IOPS, 109.33 MiB/s [2024-11-19T17:28:21.994Z]
[2024-11-19 18:28:21.811099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x25165c0)
00:29:20.523 [2024-11-19 18:28:21.811117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.523 [2024-11-19 18:28:21.811124]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.523 00:29:20.523 Latency(us) 00:29:20.523 [2024-11-19T17:28:21.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.523 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:20.523 nvme0n1 : 2.00 27995.45 109.36 0.00 0.00 4566.50 2129.92 13161.81 00:29:20.523 [2024-11-19T17:28:21.994Z] =================================================================================================================== 00:29:20.523 [2024-11-19T17:28:21.994Z] Total : 27995.45 109.36 0.00 0.00 4566.50 2129.92 13161.81 00:29:20.523 { 00:29:20.523 "results": [ 00:29:20.523 { 00:29:20.523 "job": "nvme0n1", 00:29:20.523 "core_mask": "0x2", 00:29:20.523 "workload": "randread", 00:29:20.523 "status": "finished", 00:29:20.523 "queue_depth": 128, 00:29:20.523 "io_size": 4096, 00:29:20.523 "runtime": 2.004147, 00:29:20.523 "iops": 27995.451431456873, 00:29:20.523 "mibps": 109.35723215412841, 00:29:20.523 "io_failed": 0, 00:29:20.523 "io_timeout": 0, 00:29:20.523 "avg_latency_us": 4566.502573535091, 00:29:20.523 "min_latency_us": 2129.92, 00:29:20.523 "max_latency_us": 13161.813333333334 00:29:20.523 } 00:29:20.523 ], 00:29:20.523 "core_count": 1 00:29:20.523 } 00:29:20.523 18:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:20.523 18:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:20.523 18:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:20.523 18:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:20.523 | .driver_specific 00:29:20.523 | 
.nvme_error 00:29:20.523 | .status_code 00:29:20.523 | .command_transient_transport_error' 00:29:20.783 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 )) 00:29:20.783 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2162768 00:29:20.783 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2162768 ']' 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2162768 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2162768 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2162768' 00:29:20.784 killing process with pid 2162768 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2162768 00:29:20.784 Received shutdown signal, test time was about 2.000000 seconds 00:29:20.784 00:29:20.784 Latency(us) 00:29:20.784 [2024-11-19T17:28:22.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.784 [2024-11-19T17:28:22.255Z] =================================================================================================================== 00:29:20.784 [2024-11-19T17:28:22.255Z] Total : 0.00 0.00 0.00 0.00 0.00 
0.00 0.00 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2162768 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2163542 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2163542 /var/tmp/bperf.sock 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2163542 ']' 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:20.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:20.784 18:28:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:20.784 [2024-11-19 18:28:22.237941] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:29:20.784 [2024-11-19 18:28:22.238003] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163542 ] 00:29:20.784 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:20.784 Zero copy mechanism will not be used. 00:29:21.044 [2024-11-19 18:28:22.321887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.044 [2024-11-19 18:28:22.351598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.614 18:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.614 18:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:21.614 18:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:21.614 18:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:21.875 18:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:21.875 18:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.875 18:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:29:21.875 18:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.875 18:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.875 18:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.136 nvme0n1 00:29:22.136 18:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:22.136 18:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.136 18:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:22.136 18:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.136 18:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:22.136 18:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:22.397 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:22.397 Zero copy mechanism will not be used. 00:29:22.397 Running I/O for 2 seconds... 
00:29:22.397 [2024-11-19 18:28:23.672758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.397 [2024-11-19 18:28:23.672791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.397 [2024-11-19 18:28:23.672801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:22.397 [2024-11-19 18:28:23.678527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.397 [2024-11-19 18:28:23.678550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.397 [2024-11-19 18:28:23.678558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:22.397 [2024-11-19 18:28:23.685676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.397 [2024-11-19 18:28:23.685696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.397 [2024-11-19 18:28:23.685704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:22.397 [2024-11-19 18:28:23.695453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.397 [2024-11-19 18:28:23.695478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.397 [2024-11-19 18:28:23.695485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:22.397 [2024-11-19 18:28:23.705363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.397 [2024-11-19 18:28:23.705381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.397 [2024-11-19 18:28:23.705389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:22.397 [2024-11-19 18:28:23.712824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.397 [2024-11-19 18:28:23.712843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.397 [2024-11-19 18:28:23.712850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:22.397 [2024-11-19 18:28:23.717254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.397 [2024-11-19 18:28:23.717272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.397 [2024-11-19 18:28:23.717279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:22.397 [2024-11-19 18:28:23.725395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.397 [2024-11-19 18:28:23.725414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.397 [2024-11-19 18:28:23.725421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:22.397 [2024-11-19 18:28:23.729459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.397 [2024-11-19 18:28:23.729478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.397 [2024-11-19 18:28:23.729484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:22.397 [2024-11-19 18:28:23.736641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.397 [2024-11-19 18:28:23.736660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.397 [2024-11-19 18:28:23.736667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:22.397 [2024-11-19 18:28:23.741097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.397 [2024-11-19 18:28:23.741117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.397 [2024-11-19 18:28:23.741124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:22.397 [2024-11-19 18:28:23.745981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.397 [2024-11-19 18:28:23.745999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:22.397 [2024-11-19 18:28:23.746007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:22.397 [2024-11-19 18:28:23.748916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.397 [2024-11-19 18:28:23.748933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.397 [2024-11-19 18:28:23.748940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:22.397 [2024-11-19 18:28:23.753075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.397 [2024-11-19 18:28:23.753094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.397 [2024-11-19 18:28:23.753100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:22.397 [2024-11-19 18:28:23.763867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.397 [2024-11-19 18:28:23.763885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.397 [2024-11-19 18:28:23.763892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:22.397 [2024-11-19 18:28:23.776435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.398 [2024-11-19 18:28:23.776453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.398 [2024-11-19 18:28:23.776459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:22.398 [2024-11-19 18:28:23.785866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.398 [2024-11-19 18:28:23.785885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.398 [2024-11-19 18:28:23.785892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:22.398 [2024-11-19 18:28:23.795587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.398 [2024-11-19 18:28:23.795605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.398 [2024-11-19 18:28:23.795612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:22.398 [2024-11-19 18:28:23.800203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.398 [2024-11-19 18:28:23.800221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.398 [2024-11-19 18:28:23.800228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:22.398 [2024-11-19 18:28:23.804950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.398 [2024-11-19 18:28:23.804968] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.398 [2024-11-19 18:28:23.804975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:22.398 [2024-11-19 18:28:23.811918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.398 [2024-11-19 18:28:23.811937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.398 [2024-11-19 18:28:23.811946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:22.398 [2024-11-19 18:28:23.820764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.398 [2024-11-19 18:28:23.820782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.398 [2024-11-19 18:28:23.820789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:22.398 [2024-11-19 18:28:23.831082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.398 [2024-11-19 18:28:23.831100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.398 [2024-11-19 18:28:23.831107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:22.398 [2024-11-19 18:28:23.838771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1783a10) 00:29:22.398 [2024-11-19 18:28:23.838790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.398 [2024-11-19 18:28:23.838797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:22.398 [2024-11-19 18:28:23.847851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.398 [2024-11-19 18:28:23.847870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.398 [2024-11-19 18:28:23.847877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:22.398 [2024-11-19 18:28:23.855745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.398 [2024-11-19 18:28:23.855764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.398 [2024-11-19 18:28:23.855771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:22.398 [2024-11-19 18:28:23.860202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.398 [2024-11-19 18:28:23.860220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.398 [2024-11-19 18:28:23.860227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:22.659 [2024-11-19 18:28:23.867337] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.659 [2024-11-19 18:28:23.867356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.659 [2024-11-19 18:28:23.867363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:22.659 [2024-11-19 18:28:23.872397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.659 [2024-11-19 18:28:23.872415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.659 [2024-11-19 18:28:23.872422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:22.659 [2024-11-19 18:28:23.880998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.660 [2024-11-19 18:28:23.881017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.660 [2024-11-19 18:28:23.881024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:22.660 [2024-11-19 18:28:23.889132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:22.660 [2024-11-19 18:28:23.889151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.660 [2024-11-19 18:28:23.889162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a 
p:0 m:0 dnr:0
00:29:22.660 [2024-11-19 18:28:23.897218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10)
00:29:22.660 [2024-11-19 18:28:23.897236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:22.660 [2024-11-19 18:28:23.897242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0
[... the same three-record pattern (nvme_tcp.c:1365 data digest error on tqpair=(0x1783a10), nvme_qpair.c:243 READ command notice, nvme_qpair.c:474 TRANSIENT TRANSPORT ERROR (00/22) completion) repeats from 18:28:23.908052 through 18:28:24.627+ with varying cid, lba, and sqhd values; repeated entries omitted ...]
00:29:23.184 [2024-11-19
18:28:24.627832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.184 [2024-11-19 18:28:24.627838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.184 [2024-11-19 18:28:24.638867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.184 [2024-11-19 18:28:24.638884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.184 [2024-11-19 18:28:24.638890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.184 [2024-11-19 18:28:24.649651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.184 [2024-11-19 18:28:24.649668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.184 [2024-11-19 18:28:24.649674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.445 3574.00 IOPS, 446.75 MiB/s [2024-11-19T17:28:24.916Z] [2024-11-19 18:28:24.661078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.445 [2024-11-19 18:28:24.661095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.445 [2024-11-19 18:28:24.661102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.445 [2024-11-19 18:28:24.672197] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.445 [2024-11-19 18:28:24.672215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.445 [2024-11-19 18:28:24.672221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.445 [2024-11-19 18:28:24.683330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.445 [2024-11-19 18:28:24.683348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.445 [2024-11-19 18:28:24.683354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.445 [2024-11-19 18:28:24.694535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.445 [2024-11-19 18:28:24.694552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.445 [2024-11-19 18:28:24.694559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.445 [2024-11-19 18:28:24.705564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.445 [2024-11-19 18:28:24.705582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.445 [2024-11-19 18:28:24.705588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:007a p:0 m:0 dnr:0 00:29:23.445 [2024-11-19 18:28:24.716166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.445 [2024-11-19 18:28:24.716183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.445 [2024-11-19 18:28:24.716190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.445 [2024-11-19 18:28:24.727794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.445 [2024-11-19 18:28:24.727811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.445 [2024-11-19 18:28:24.727818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.445 [2024-11-19 18:28:24.737294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.445 [2024-11-19 18:28:24.737311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.445 [2024-11-19 18:28:24.737318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.445 [2024-11-19 18:28:24.746697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.445 [2024-11-19 18:28:24.746716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.445 [2024-11-19 18:28:24.746728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.445 [2024-11-19 18:28:24.757187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.445 [2024-11-19 18:28:24.757206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.445 [2024-11-19 18:28:24.757212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.445 [2024-11-19 18:28:24.769162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.445 [2024-11-19 18:28:24.769180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.445 [2024-11-19 18:28:24.769187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.445 [2024-11-19 18:28:24.781898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.445 [2024-11-19 18:28:24.781917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.445 [2024-11-19 18:28:24.781923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.445 [2024-11-19 18:28:24.793793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.445 [2024-11-19 18:28:24.793811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.445 [2024-11-19 18:28:24.793817] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.445 [2024-11-19 18:28:24.804343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.445 [2024-11-19 18:28:24.804361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.445 [2024-11-19 18:28:24.804368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.445 [2024-11-19 18:28:24.814479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.445 [2024-11-19 18:28:24.814497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.446 [2024-11-19 18:28:24.814504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.446 [2024-11-19 18:28:24.823775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.446 [2024-11-19 18:28:24.823794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.446 [2024-11-19 18:28:24.823800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.446 [2024-11-19 18:28:24.834188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.446 [2024-11-19 18:28:24.834206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:23.446 [2024-11-19 18:28:24.834213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.446 [2024-11-19 18:28:24.843007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.446 [2024-11-19 18:28:24.843028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.446 [2024-11-19 18:28:24.843034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.446 [2024-11-19 18:28:24.853293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.446 [2024-11-19 18:28:24.853312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.446 [2024-11-19 18:28:24.853319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.446 [2024-11-19 18:28:24.865034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.446 [2024-11-19 18:28:24.865051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.446 [2024-11-19 18:28:24.865057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.446 [2024-11-19 18:28:24.874034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.446 [2024-11-19 18:28:24.874052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.446 [2024-11-19 18:28:24.874058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.446 [2024-11-19 18:28:24.884414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.446 [2024-11-19 18:28:24.884432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.446 [2024-11-19 18:28:24.884438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.446 [2024-11-19 18:28:24.895331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.446 [2024-11-19 18:28:24.895349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.446 [2024-11-19 18:28:24.895356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.446 [2024-11-19 18:28:24.905354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.446 [2024-11-19 18:28:24.905371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.446 [2024-11-19 18:28:24.905377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.707 [2024-11-19 18:28:24.914186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.707 [2024-11-19 18:28:24.914205] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.707 [2024-11-19 18:28:24.914212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.707 [2024-11-19 18:28:24.924458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.707 [2024-11-19 18:28:24.924476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.707 [2024-11-19 18:28:24.924482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.707 [2024-11-19 18:28:24.934771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.707 [2024-11-19 18:28:24.934790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.707 [2024-11-19 18:28:24.934796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.707 [2024-11-19 18:28:24.945033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.707 [2024-11-19 18:28:24.945051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.707 [2024-11-19 18:28:24.945058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.707 [2024-11-19 18:28:24.953832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1783a10) 00:29:23.707 [2024-11-19 18:28:24.953851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.707 [2024-11-19 18:28:24.953857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.707 [2024-11-19 18:28:24.964218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.707 [2024-11-19 18:28:24.964236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.707 [2024-11-19 18:28:24.964243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.707 [2024-11-19 18:28:24.975375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.707 [2024-11-19 18:28:24.975394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.707 [2024-11-19 18:28:24.975401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:24.985966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:24.985984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 18:28:24.985991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:24.996995] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:24.997013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 18:28:24.997020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:25.006931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:25.006949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 18:28:25.006956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:25.017892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:25.017911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 18:28:25.017922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:25.028936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:25.028955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 18:28:25.028962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:005a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:25.038497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:25.038516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 18:28:25.038523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:25.049978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:25.049997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 18:28:25.050004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:25.061295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:25.061314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 18:28:25.061320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:25.069732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:25.069752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 18:28:25.069758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:25.079735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:25.079754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 18:28:25.079761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:25.091891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:25.091909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 18:28:25.091916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:25.103647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:25.103666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 18:28:25.103672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:25.114350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:25.114369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 
18:28:25.114375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:25.123899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:25.123918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 18:28:25.123924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:25.135343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:25.135362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 18:28:25.135368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:25.145449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:25.145468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 18:28:25.145474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:25.154927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:25.154945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 18:28:25.154951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.708 [2024-11-19 18:28:25.164706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.708 [2024-11-19 18:28:25.164725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.708 [2024-11-19 18:28:25.164732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.969 [2024-11-19 18:28:25.176076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.969 [2024-11-19 18:28:25.176095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.969 [2024-11-19 18:28:25.176101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.969 [2024-11-19 18:28:25.187064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.969 [2024-11-19 18:28:25.187083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.969 [2024-11-19 18:28:25.187089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.969 [2024-11-19 18:28:25.198200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.969 [2024-11-19 18:28:25.198219] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.969 [2024-11-19 18:28:25.198228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.969 [2024-11-19 18:28:25.207140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.969 [2024-11-19 18:28:25.207164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.969 [2024-11-19 18:28:25.207170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.969 [2024-11-19 18:28:25.216133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.969 [2024-11-19 18:28:25.216153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.969 [2024-11-19 18:28:25.216164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.969 [2024-11-19 18:28:25.226394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.969 [2024-11-19 18:28:25.226412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.969 [2024-11-19 18:28:25.226419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.237224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.237242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.237249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.247378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.247397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.247403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.257446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.257464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.257471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.267201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.267220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.267226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.277105] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.277123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.277129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.285189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.285210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.285217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.294096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.294115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.294121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.302486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.302505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.302512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:003a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.312207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.312226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.312232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.320869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.320888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.320895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.331197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.331216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.331222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.339836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.339855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.339861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.351238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.351257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.351264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.361298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.361316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.361323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.370877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.370896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.370902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.380444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.380462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 
18:28:25.380469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.391602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.391620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.391627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.401685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.401703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.401709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.413168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.413187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.413194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.423807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.423826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21056 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.423832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:23.970 [2024-11-19 18:28:25.433873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:23.970 [2024-11-19 18:28:25.433891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.970 [2024-11-19 18:28:25.433898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:24.231 [2024-11-19 18:28:25.444453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.231 [2024-11-19 18:28:25.444471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.231 [2024-11-19 18:28:25.444477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:24.231 [2024-11-19 18:28:25.455164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.231 [2024-11-19 18:28:25.455183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.231 [2024-11-19 18:28:25.455193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:24.231 [2024-11-19 18:28:25.465921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.231 [2024-11-19 18:28:25.465940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.231 [2024-11-19 18:28:25.465947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:24.231 [2024-11-19 18:28:25.477300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.231 [2024-11-19 18:28:25.477318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.231 [2024-11-19 18:28:25.477325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:24.231 [2024-11-19 18:28:25.488504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.231 [2024-11-19 18:28:25.488522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.232 [2024-11-19 18:28:25.488529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:24.232 [2024-11-19 18:28:25.500738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.232 [2024-11-19 18:28:25.500757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.232 [2024-11-19 18:28:25.500764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:24.232 [2024-11-19 18:28:25.512859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1783a10) 00:29:24.232 [2024-11-19 18:28:25.512877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.232 [2024-11-19 18:28:25.512884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:24.232 [2024-11-19 18:28:25.524743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.232 [2024-11-19 18:28:25.524762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.232 [2024-11-19 18:28:25.524768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:24.232 [2024-11-19 18:28:25.537153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.232 [2024-11-19 18:28:25.537176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.232 [2024-11-19 18:28:25.537183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:24.232 [2024-11-19 18:28:25.549650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.232 [2024-11-19 18:28:25.549669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.232 [2024-11-19 18:28:25.549675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:24.232 [2024-11-19 18:28:25.562316] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.232 [2024-11-19 18:28:25.562337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.232 [2024-11-19 18:28:25.562344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:24.232 [2024-11-19 18:28:25.574617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.232 [2024-11-19 18:28:25.574636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.232 [2024-11-19 18:28:25.574642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:24.232 [2024-11-19 18:28:25.587162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.232 [2024-11-19 18:28:25.587181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.232 [2024-11-19 18:28:25.587187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:24.232 [2024-11-19 18:28:25.599405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.232 [2024-11-19 18:28:25.599424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.232 [2024-11-19 18:28:25.599430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:001a p:0 m:0 dnr:0 00:29:24.232 [2024-11-19 18:28:25.612105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.232 [2024-11-19 18:28:25.612125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.232 [2024-11-19 18:28:25.612131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:24.232 [2024-11-19 18:28:25.624689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.232 [2024-11-19 18:28:25.624709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.232 [2024-11-19 18:28:25.624718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:24.232 [2024-11-19 18:28:25.637069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.232 [2024-11-19 18:28:25.637088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.232 [2024-11-19 18:28:25.637094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:24.232 [2024-11-19 18:28:25.648683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.232 [2024-11-19 18:28:25.648702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.232 [2024-11-19 18:28:25.648709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:24.232 3249.00 IOPS, 406.12 MiB/s [2024-11-19T17:28:25.703Z] [2024-11-19 18:28:25.661796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1783a10) 00:29:24.232 [2024-11-19 18:28:25.661815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.232 [2024-11-19 18:28:25.661825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:24.232 00:29:24.232 Latency(us) 00:29:24.232 [2024-11-19T17:28:25.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.232 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:24.232 nvme0n1 : 2.00 3249.74 406.22 0.00 0.00 4919.05 819.20 14308.69 00:29:24.232 [2024-11-19T17:28:25.703Z] =================================================================================================================== 00:29:24.232 [2024-11-19T17:28:25.703Z] Total : 3249.74 406.22 0.00 0.00 4919.05 819.20 14308.69 00:29:24.232 { 00:29:24.232 "results": [ 00:29:24.232 { 00:29:24.232 "job": "nvme0n1", 00:29:24.232 "core_mask": "0x2", 00:29:24.232 "workload": "randread", 00:29:24.232 "status": "finished", 00:29:24.232 "queue_depth": 16, 00:29:24.232 "io_size": 131072, 00:29:24.232 "runtime": 2.004466, 00:29:24.232 "iops": 3249.743323159385, 00:29:24.232 "mibps": 406.2179153949231, 00:29:24.232 "io_failed": 0, 00:29:24.232 "io_timeout": 0, 00:29:24.232 "avg_latency_us": 4919.054278988844, 00:29:24.232 "min_latency_us": 819.2, 00:29:24.232 "max_latency_us": 14308.693333333333 00:29:24.232 } 00:29:24.232 ], 00:29:24.232 "core_count": 1 00:29:24.232 } 00:29:24.232 18:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 
00:29:24.232 18:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:24.232 18:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:24.232 | .driver_specific 00:29:24.232 | .nvme_error 00:29:24.232 | .status_code 00:29:24.232 | .command_transient_transport_error' 00:29:24.232 18:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:24.494 18:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 210 > 0 )) 00:29:24.494 18:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2163542 00:29:24.494 18:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2163542 ']' 00:29:24.494 18:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2163542 00:29:24.494 18:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:24.494 18:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.494 18:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2163542 00:29:24.494 18:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:24.494 18:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:24.494 18:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2163542' 00:29:24.494 killing process with pid 2163542 00:29:24.494 18:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@973 -- # kill 2163542 00:29:24.494 Received shutdown signal, test time was about 2.000000 seconds 00:29:24.494 00:29:24.494 Latency(us) 00:29:24.494 [2024-11-19T17:28:25.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.494 [2024-11-19T17:28:25.965Z] =================================================================================================================== 00:29:24.494 [2024-11-19T17:28:25.965Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.494 18:28:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2163542 00:29:24.756 18:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:24.756 18:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:24.756 18:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:24.756 18:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:24.756 18:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:24.756 18:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2164260 00:29:24.756 18:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2164260 /var/tmp/bperf.sock 00:29:24.756 18:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2164260 ']' 00:29:24.756 18:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:24.756 18:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:24.756 18:28:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.756 18:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:24.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:24.756 18:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.756 18:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:24.756 [2024-11-19 18:28:26.100939] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:29:24.756 [2024-11-19 18:28:26.100997] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2164260 ] 00:29:24.756 [2024-11-19 18:28:26.184457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.756 [2024-11-19 18:28:26.213832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.698 18:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.698 18:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:25.698 18:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:25.698 18:28:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:25.698 18:28:27 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:25.698 18:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.698 18:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:25.698 18:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.698 18:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.698 18:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:26.270 nvme0n1 00:29:26.270 18:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:26.270 18:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.270 18:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:26.270 18:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.270 18:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:26.270 18:28:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:26.270 Running I/O for 2 seconds... 
00:29:26.270 [2024-11-19 18:28:27.625022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f4b08
00:29:26.270 [2024-11-19 18:28:27.626148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.270 [2024-11-19 18:28:27.626178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:29:26.270 [2024-11-19 18:28:27.634388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166dece0
00:29:26.270 [2024-11-19 18:28:27.635611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.270 [2024-11-19 18:28:27.635628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:26.270 [2024-11-19 18:28:27.642183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166fb480
00:29:26.270 [2024-11-19 18:28:27.643287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.270 [2024-11-19 18:28:27.643302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:29:26.270 [2024-11-19 18:28:27.650944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166fa7d8
00:29:26.270 [2024-11-19 18:28:27.652050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.270 [2024-11-19 18:28:27.652066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:26.270 [2024-11-19 18:28:27.659414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166ebb98
00:29:26.270 [2024-11-19 18:28:27.660520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.270 [2024-11-19 18:28:27.660536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:26.270 [2024-11-19 18:28:27.667341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166eb760
00:29:26.270 [2024-11-19 18:28:27.668349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.270 [2024-11-19 18:28:27.668365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:29:26.270 [2024-11-19 18:28:27.676053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e6fa8
00:29:26.270 [2024-11-19 18:28:27.677048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.270 [2024-11-19 18:28:27.677064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:29:26.270 [2024-11-19 18:28:27.684680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166df550
00:29:26.270 [2024-11-19 18:28:27.685689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.270 [2024-11-19 18:28:27.685709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.270 [2024-11-19 18:28:27.693125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e0630
00:29:26.270 [2024-11-19 18:28:27.694136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.270 [2024-11-19 18:28:27.694153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.270 [2024-11-19 18:28:27.701562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e1710
00:29:26.270 [2024-11-19 18:28:27.702559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.270 [2024-11-19 18:28:27.702575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.270 [2024-11-19 18:28:27.710000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166de038
00:29:26.270 [2024-11-19 18:28:27.710992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.270 [2024-11-19 18:28:27.711009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.270 [2024-11-19 18:28:27.718449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e3060
00:29:26.270 [2024-11-19 18:28:27.719445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.270 [2024-11-19 18:28:27.719462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.270 [2024-11-19 18:28:27.726898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f8a50
00:29:26.270 [2024-11-19 18:28:27.727869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.270 [2024-11-19 18:28:27.727885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.270 [2024-11-19 18:28:27.735330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f9b30
00:29:26.270 [2024-11-19 18:28:27.736335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.270 [2024-11-19 18:28:27.736351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.532 [2024-11-19 18:28:27.743759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166fac10
00:29:26.532 [2024-11-19 18:28:27.744752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.744767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.752181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166fbcf0
00:29:26.533 [2024-11-19 18:28:27.753191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.753207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.760609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166feb58
00:29:26.533 [2024-11-19 18:28:27.761616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.761635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.769047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166fe2e8
00:29:26.533 [2024-11-19 18:28:27.770062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.770079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.777504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166ec408
00:29:26.533 [2024-11-19 18:28:27.778501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.778518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.785929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e4578
00:29:26.533 [2024-11-19 18:28:27.786934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.786950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.794364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e5658
00:29:26.533 [2024-11-19 18:28:27.795327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.795343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.802776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e6738
00:29:26.533 [2024-11-19 18:28:27.803776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.803791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.811206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166ed4e8
00:29:26.533 [2024-11-19 18:28:27.812199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.812215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.819628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166df118
00:29:26.533 [2024-11-19 18:28:27.820641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.820657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.828049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e01f8
00:29:26.533 [2024-11-19 18:28:27.829059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.829076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.836483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e12d8
00:29:26.533 [2024-11-19 18:28:27.837499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.837515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.844896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e23b8
00:29:26.533 [2024-11-19 18:28:27.845886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.845902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.853336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e2c28
00:29:26.533 [2024-11-19 18:28:27.854344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.854360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.861761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e3d08
00:29:26.533 [2024-11-19 18:28:27.862769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.862784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.870617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f4298
00:29:26.533 [2024-11-19 18:28:27.871397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.871413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.880334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f1ca0
00:29:26.533 [2024-11-19 18:28:27.881887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.881902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.886409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f0350
00:29:26.533 [2024-11-19 18:28:27.887116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.887132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.895101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166fb8b8
00:29:26.533 [2024-11-19 18:28:27.895798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.895814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.903541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e3d08
00:29:26.533 [2024-11-19 18:28:27.904258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.904274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.911998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f0350
00:29:26.533 [2024-11-19 18:28:27.912693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.912710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.920429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166fb8b8
00:29:26.533 [2024-11-19 18:28:27.921112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.921128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.928844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e3d08
00:29:26.533 [2024-11-19 18:28:27.929561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.929577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.937271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f0350
00:29:26.533 [2024-11-19 18:28:27.937930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.937946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.945710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166fb8b8
00:29:26.533 [2024-11-19 18:28:27.946380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.946396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.954435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166fef90
00:29:26.533 [2024-11-19 18:28:27.955325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.955341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.533 [2024-11-19 18:28:27.962916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e23b8
00:29:26.533 [2024-11-19 18:28:27.963806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.533 [2024-11-19 18:28:27.963822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.534 [2024-11-19 18:28:27.971385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f8a50
00:29:26.534 [2024-11-19 18:28:27.972238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.534 [2024-11-19 18:28:27.972254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.534 [2024-11-19 18:28:27.979851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f7970
00:29:26.534 [2024-11-19 18:28:27.980724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.534 [2024-11-19 18:28:27.980742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.534 [2024-11-19 18:28:27.988322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166ebfd0
00:29:26.534 [2024-11-19 18:28:27.989152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.534 [2024-11-19 18:28:27.989170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.534 [2024-11-19 18:28:27.996810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166fd640
00:29:26.534 [2024-11-19 18:28:27.997683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.534 [2024-11-19 18:28:27.997699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.795 [2024-11-19 18:28:28.005273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166df550
00:29:26.795 [2024-11-19 18:28:28.006142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.795 [2024-11-19 18:28:28.006162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.795 [2024-11-19 18:28:28.013758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166fef90
00:29:26.795 [2024-11-19 18:28:28.014629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.795 [2024-11-19 18:28:28.014647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.795 [2024-11-19 18:28:28.022205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e23b8
00:29:26.795 [2024-11-19 18:28:28.023053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.795 [2024-11-19 18:28:28.023069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.795 [2024-11-19 18:28:28.030665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f8a50
00:29:26.795 [2024-11-19 18:28:28.031534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.795 [2024-11-19 18:28:28.031550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.795 [2024-11-19 18:28:28.039142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f7970
00:29:26.795 [2024-11-19 18:28:28.040018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.795 [2024-11-19 18:28:28.040034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.795 [2024-11-19 18:28:28.047629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166ebfd0
00:29:26.795 [2024-11-19 18:28:28.048511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.795 [2024-11-19 18:28:28.048527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.795 [2024-11-19 18:28:28.056106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166fd640
00:29:26.795 [2024-11-19 18:28:28.056951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.795 [2024-11-19 18:28:28.056967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:26.795 [2024-11-19 18:28:28.064512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f0bc0
00:29:26.795 [2024-11-19 18:28:28.065332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.065348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.073075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f7100
00:29:26.796 [2024-11-19 18:28:28.073940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.073956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.081546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e1710
00:29:26.796 [2024-11-19 18:28:28.082427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.082443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.090008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f0bc0
00:29:26.796 [2024-11-19 18:28:28.090882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.090898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.098610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e7c50
00:29:26.796 [2024-11-19 18:28:28.099428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.099444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.107024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f9b30
00:29:26.796 [2024-11-19 18:28:28.107894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.107910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.115441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166fbcf0
00:29:26.796 [2024-11-19 18:28:28.116296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.116312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.123903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f0bc0
00:29:26.796 [2024-11-19 18:28:28.124773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.124789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.132328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f7100
00:29:26.796 [2024-11-19 18:28:28.133187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.133202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.140765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166ef270
00:29:26.796 [2024-11-19 18:28:28.141618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.141633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.149185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e5ec8
00:29:26.796 [2024-11-19 18:28:28.150054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.150070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.157608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f2510
00:29:26.796 [2024-11-19 18:28:28.158458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.158474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.166015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166e7c50
00:29:26.796 [2024-11-19 18:28:28.166870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.166885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.174456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f9b30
00:29:26.796 [2024-11-19 18:28:28.175311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.175327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.182890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166fbcf0
00:29:26.796 [2024-11-19 18:28:28.183764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.183780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.191338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f0bc0
00:29:26.796 [2024-11-19 18:28:28.192066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.192081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.200011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f0788
00:29:26.796 [2024-11-19 18:28:28.200999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.201018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.208710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166ef270
00:29:26.796 [2024-11-19 18:28:28.209526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.209542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.217856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f8a50
00:29:26.796 [2024-11-19 18:28:28.219090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.219105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.225231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:26.796 [2024-11-19 18:28:28.225527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.225543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.233946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:26.796 [2024-11-19 18:28:28.234206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.234221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.242717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:26.796 [2024-11-19 18:28:28.242978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.242992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:26.796 [2024-11-19 18:28:28.251468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:26.796 [2024-11-19 18:28:28.251608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.796 [2024-11-19 18:28:28.251623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:26.797 [2024-11-19 18:28:28.260221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:26.797 [2024-11-19 18:28:28.260479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:26.797 [2024-11-19 18:28:28.260495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.058 [2024-11-19 18:28:28.268954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.058 [2024-11-19 18:28:28.269241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.058 [2024-11-19 18:28:28.269256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.058 [2024-11-19 18:28:28.277700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.058 [2024-11-19 18:28:28.277983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.058 [2024-11-19 18:28:28.277999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.058 [2024-11-19 18:28:28.286437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.058 [2024-11-19 18:28:28.286708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.058 [2024-11-19 18:28:28.286724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.058 [2024-11-19 18:28:28.295131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.058 [2024-11-19 18:28:28.295389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.058 [2024-11-19 18:28:28.295404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.058 [2024-11-19 18:28:28.303860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.058 [2024-11-19 18:28:28.304129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.058 [2024-11-19 18:28:28.304145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.058 [2024-11-19 18:28:28.312684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.058 [2024-11-19 18:28:28.312922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.058 [2024-11-19 18:28:28.312937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.058 [2024-11-19 18:28:28.321469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.058 [2024-11-19 18:28:28.321689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.058 [2024-11-19 18:28:28.321704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.058 [2024-11-19 18:28:28.330173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.058 [2024-11-19 18:28:28.330453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.058 [2024-11-19 18:28:28.330469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.058 [2024-11-19 18:28:28.338888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.058 [2024-11-19 18:28:28.339104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.058 [2024-11-19 18:28:28.339120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.058 [2024-11-19 18:28:28.347589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.058 [2024-11-19 18:28:28.347821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.058 [2024-11-19 18:28:28.347836] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.356315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.356534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.356549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.365025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.365298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.365313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.373757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.373967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.373982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.382510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.382789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.382804] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.391241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.391471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.391486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.399932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.400167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.400182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.408653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.408878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.408893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.417434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.417700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21852 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:27.059 [2024-11-19 18:28:28.417715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.426127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.426400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.426419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.434837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.435071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.435086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.443525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.443788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.443803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.452226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.452460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 
nsid:1 lba:7358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.452475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.460962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.461192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.461207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.469796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.470083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.470099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.478542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.478808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.478823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.487283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.487524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.487539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.496004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.496124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.496138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.504740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.504948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.504963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.513488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 [2024-11-19 18:28:28.513734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.513750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.059 [2024-11-19 18:28:28.522148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.059 
[2024-11-19 18:28:28.522376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.059 [2024-11-19 18:28:28.522391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.321 [2024-11-19 18:28:28.530912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.321 [2024-11-19 18:28:28.531151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.321 [2024-11-19 18:28:28.531169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.321 [2024-11-19 18:28:28.539621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.321 [2024-11-19 18:28:28.539932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.321 [2024-11-19 18:28:28.539947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.321 [2024-11-19 18:28:28.548344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.321 [2024-11-19 18:28:28.548588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.321 [2024-11-19 18:28:28.548602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.321 [2024-11-19 18:28:28.557001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.321 [2024-11-19 18:28:28.557276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.321 [2024-11-19 18:28:28.557291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.321 [2024-11-19 18:28:28.565737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.321 [2024-11-19 18:28:28.566042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.321 [2024-11-19 18:28:28.566058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.321 [2024-11-19 18:28:28.574424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.321 [2024-11-19 18:28:28.574694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.321 [2024-11-19 18:28:28.574708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.321 [2024-11-19 18:28:28.583156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.321 [2024-11-19 18:28:28.583420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.321 [2024-11-19 18:28:28.583436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.321 [2024-11-19 18:28:28.591843] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.321 [2024-11-19 18:28:28.592079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.321 [2024-11-19 18:28:28.592095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.321 [2024-11-19 18:28:28.600593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.321 [2024-11-19 18:28:28.600843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.321 [2024-11-19 18:28:28.600858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.321 [2024-11-19 18:28:28.609321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.321 [2024-11-19 18:28:28.609597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.321 [2024-11-19 18:28:28.609612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.321 29580.00 IOPS, 115.55 MiB/s [2024-11-19T17:28:28.792Z] [2024-11-19 18:28:28.617989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.321 [2024-11-19 18:28:28.618277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.321 [2024-11-19 18:28:28.618300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.321 [2024-11-19 18:28:28.626689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.321 [2024-11-19 18:28:28.626966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.321 [2024-11-19 18:28:28.626981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.321 [2024-11-19 18:28:28.635480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.321 [2024-11-19 18:28:28.635764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.321 [2024-11-19 18:28:28.635780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.321 [2024-11-19 18:28:28.644216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.322 [2024-11-19 18:28:28.644501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.322 [2024-11-19 18:28:28.644517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.322 [2024-11-19 18:28:28.652919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.322 [2024-11-19 18:28:28.653205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.322 [2024-11-19 18:28:28.653223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.322 [2024-11-19 18:28:28.661618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.322 [2024-11-19 18:28:28.661865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.322 [2024-11-19 18:28:28.661880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.322 [2024-11-19 18:28:28.670400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.322 [2024-11-19 18:28:28.670658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.322 [2024-11-19 18:28:28.670673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.322 [2024-11-19 18:28:28.679110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.322 [2024-11-19 18:28:28.679371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.322 [2024-11-19 18:28:28.679386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.322 [2024-11-19 18:28:28.687839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.322 [2024-11-19 18:28:28.688106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.322 
[2024-11-19 18:28:28.688121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.322 [2024-11-19 18:28:28.696512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.322 [2024-11-19 18:28:28.696636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.322 [2024-11-19 18:28:28.696651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.322 [2024-11-19 18:28:28.705325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.322 [2024-11-19 18:28:28.705589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.322 [2024-11-19 18:28:28.705604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.322 [2024-11-19 18:28:28.714099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.322 [2024-11-19 18:28:28.714360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.322 [2024-11-19 18:28:28.714376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.322 [2024-11-19 18:28:28.722787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.322 [2024-11-19 18:28:28.723062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15970 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.322 [2024-11-19 18:28:28.723078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.322 [2024-11-19 18:28:28.731506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.322 [2024-11-19 18:28:28.731793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.322 [2024-11-19 18:28:28.731809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.322 [2024-11-19 18:28:28.740223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.322 [2024-11-19 18:28:28.740484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.322 [2024-11-19 18:28:28.740499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.322 [2024-11-19 18:28:28.748935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.322 [2024-11-19 18:28:28.749184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:27.322 [2024-11-19 18:28:28.749202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:27.322 [2024-11-19 18:28:28.757641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:27.322 [2024-11-19 18:28:28.757924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:51 nsid:1 lba:634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.322 [2024-11-19 18:28:28.757940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.322 [2024-11-19 18:28:28.766358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.322 [2024-11-19 18:28:28.766661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.322 [2024-11-19 18:28:28.766683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.322 [2024-11-19 18:28:28.775055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.322 [2024-11-19 18:28:28.775195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.322 [2024-11-19 18:28:28.775210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.322 [2024-11-19 18:28:28.783741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.322 [2024-11-19 18:28:28.784017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.322 [2024-11-19 18:28:28.784033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.582 [2024-11-19 18:28:28.792463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.582 [2024-11-19 18:28:28.792775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.582 [2024-11-19 18:28:28.792791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.582 [2024-11-19 18:28:28.801151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.582 [2024-11-19 18:28:28.801499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.582 [2024-11-19 18:28:28.801515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.582 [2024-11-19 18:28:28.809971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.582 [2024-11-19 18:28:28.810325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.582 [2024-11-19 18:28:28.810341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.582 [2024-11-19 18:28:28.818626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.582 [2024-11-19 18:28:28.818920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.818936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.827360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.827620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.827636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.836067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.836385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.836401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.844740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.845018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.845033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.853540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.853806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.853821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.862226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.862475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.862491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.871077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.871378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.871393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.879822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.880067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.880085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.888526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.888825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.888840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.897306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.897565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.897579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.906007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.906267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.906282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.914708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.915101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.915117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.923413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.923565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.923580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.932170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.932404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.932420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.940902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.941167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.941189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.949586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.949812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.949827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.958266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.958517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.958539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.966937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.967220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.967236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.975665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.975942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.975958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.984373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.984641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.984657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:28.993131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:28.993405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:28.993421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:29.001790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:29.002064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:29.002078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:29.010546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:29.010840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:29.010856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:29.019248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:29.019479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:29.019493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:29.027935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:29.028204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:29.028220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:29.036637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:29.036912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:29.036927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.583 [2024-11-19 18:28:29.045355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.583 [2024-11-19 18:28:29.045614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.583 [2024-11-19 18:28:29.045629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.054054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.054329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.054344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.062840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.063074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.063088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.071579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.071838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.071853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.080317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.080576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.080592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.089083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.089358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.089373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.097789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.098058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.098074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.106473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.106749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.106769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.115228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.115487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.115501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.123955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.124208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.124223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.132624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.132856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.132872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.141310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.141587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.141603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.150016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.150271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.150286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.158691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.158953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.158968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.167408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.167684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.167699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.176210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.176447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.176462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.184867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.185142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.185162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.193688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.193949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.193964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.202335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.202603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.202619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.211103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.211373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.211389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.219802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.220053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.220068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.228524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.228782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.228797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.845 [2024-11-19 18:28:29.237188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.845 [2024-11-19 18:28:29.237456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.845 [2024-11-19 18:28:29.237471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.846 [2024-11-19 18:28:29.245951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.846 [2024-11-19 18:28:29.246260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.846 [2024-11-19 18:28:29.246275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.846 [2024-11-19 18:28:29.254662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.846 [2024-11-19 18:28:29.254910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.846 [2024-11-19 18:28:29.254926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.846 [2024-11-19 18:28:29.263376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.846 [2024-11-19 18:28:29.263633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.846 [2024-11-19 18:28:29.263648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.846 [2024-11-19 18:28:29.272146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.846 [2024-11-19 18:28:29.272395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.846 [2024-11-19 18:28:29.272410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.846 [2024-11-19 18:28:29.280873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.846 [2024-11-19 18:28:29.281193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.846 [2024-11-19 18:28:29.281209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.846 [2024-11-19 18:28:29.289575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.846 [2024-11-19 18:28:29.289805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.846 [2024-11-19 18:28:29.289820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.846 [2024-11-19 18:28:29.298294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.846 [2024-11-19 18:28:29.298504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.846 [2024-11-19 18:28:29.298519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:27.846 [2024-11-19 18:28:29.307022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:27.846 [2024-11-19 18:28:29.307280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:27.846 [2024-11-19 18:28:29.307295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:28.108 [2024-11-19 18:28:29.315769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:28.108 [2024-11-19 18:28:29.316006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:28.108 [2024-11-19 18:28:29.316027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:28.108 [2024-11-19 18:28:29.324448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:28.108 [2024-11-19 18:28:29.324704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:28.108 [2024-11-19 18:28:29.324719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:28.108 [2024-11-19 18:28:29.333145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:28.108 [2024-11-19 18:28:29.333408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:28.108 [2024-11-19 18:28:29.333426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:28.108 [2024-11-19 18:28:29.341822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:28.108 [2024-11-19 18:28:29.342077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:28.108 [2024-11-19 18:28:29.342092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:28.108 [2024-11-19 18:28:29.350563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:28.109 [2024-11-19 18:28:29.350842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:28.109 [2024-11-19 18:28:29.350856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:28.109 [2024-11-19 18:28:29.359239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:28.109 [2024-11-19 18:28:29.359485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:28.109 [2024-11-19 18:28:29.359500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:28.109 [2024-11-19 18:28:29.368018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:28.109 [2024-11-19 18:28:29.368283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:28.109 [2024-11-19 18:28:29.368298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:28.109 [2024-11-19 18:28:29.376698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:28.109 [2024-11-19 18:28:29.376971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:28.109 [2024-11-19 18:28:29.376987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:28.109 [2024-11-19 18:28:29.385503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:28.109 [2024-11-19 18:28:29.385709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:28.109 [2024-11-19 18:28:29.385724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:28.109 [2024-11-19 18:28:29.394185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:28.109 [2024-11-19 18:28:29.394444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:28.109 [2024-11-19 18:28:29.394459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:28.109 [2024-11-19 18:28:29.402869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:28.109 [2024-11-19 18:28:29.403162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:28.109 [2024-11-19 18:28:29.403178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:28.109 [2024-11-19 18:28:29.411551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:28.109 [2024-11-19 18:28:29.411684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:28.109 [2024-11-19 18:28:29.411701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:28.109 [2024-11-19 18:28:29.420271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:28.109 [2024-11-19 18:28:29.420435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:28.109 [2024-11-19 18:28:29.420449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:28.109 [2024-11-19 18:28:29.428987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:28.109 [2024-11-19 18:28:29.429246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:28.109 [2024-11-19 18:28:29.429262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:28.109 [2024-11-19 18:28:29.437664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8
00:29:28.109 [2024-11-19 18:28:29.437952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:28.109 [2024-11-19 18:28:29.437968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.109 [2024-11-19 18:28:29.446354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.109 [2024-11-19 18:28:29.446629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.109 [2024-11-19 18:28:29.446644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.109 [2024-11-19 18:28:29.455061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.109 [2024-11-19 18:28:29.455327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.109 [2024-11-19 18:28:29.455342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.109 [2024-11-19 18:28:29.463779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.109 [2024-11-19 18:28:29.464013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.109 [2024-11-19 18:28:29.464028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.109 [2024-11-19 18:28:29.472602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.109 [2024-11-19 18:28:29.472892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12526 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.109 [2024-11-19 18:28:29.472907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.109 [2024-11-19 18:28:29.481272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.109 [2024-11-19 18:28:29.481542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.109 [2024-11-19 18:28:29.481557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.109 [2024-11-19 18:28:29.489971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.109 [2024-11-19 18:28:29.490198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.109 [2024-11-19 18:28:29.490213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.109 [2024-11-19 18:28:29.498720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.109 [2024-11-19 18:28:29.498960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.109 [2024-11-19 18:28:29.498975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.109 [2024-11-19 18:28:29.507462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.109 [2024-11-19 18:28:29.507689] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.109 [2024-11-19 18:28:29.507704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.109 [2024-11-19 18:28:29.516148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.109 [2024-11-19 18:28:29.516430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.109 [2024-11-19 18:28:29.516445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.109 [2024-11-19 18:28:29.524858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.109 [2024-11-19 18:28:29.525081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.109 [2024-11-19 18:28:29.525095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.109 [2024-11-19 18:28:29.533585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.109 [2024-11-19 18:28:29.533818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.109 [2024-11-19 18:28:29.533833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.109 [2024-11-19 18:28:29.542367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.109 [2024-11-19 18:28:29.542598] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.109 [2024-11-19 18:28:29.542613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.109 [2024-11-19 18:28:29.551096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.109 [2024-11-19 18:28:29.551461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.109 [2024-11-19 18:28:29.551477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.109 [2024-11-19 18:28:29.559799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.109 [2024-11-19 18:28:29.560028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.109 [2024-11-19 18:28:29.560044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.109 [2024-11-19 18:28:29.568493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.109 [2024-11-19 18:28:29.568718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.109 [2024-11-19 18:28:29.568733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.371 [2024-11-19 18:28:29.577185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 
00:29:28.371 [2024-11-19 18:28:29.577453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.371 [2024-11-19 18:28:29.577469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.371 [2024-11-19 18:28:29.585915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.371 [2024-11-19 18:28:29.586188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.371 [2024-11-19 18:28:29.586203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.371 [2024-11-19 18:28:29.594639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.371 [2024-11-19 18:28:29.594914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.371 [2024-11-19 18:28:29.594929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.371 [2024-11-19 18:28:29.603347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.371 [2024-11-19 18:28:29.603598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.371 [2024-11-19 18:28:29.603613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.371 [2024-11-19 18:28:29.612063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21a1520) with pdu=0x2000166f5be8 00:29:28.371 [2024-11-19 18:28:29.612359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.371 [2024-11-19 18:28:29.612375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:28.371 29448.50 IOPS, 115.03 MiB/s 00:29:28.371 Latency(us) 00:29:28.371 [2024-11-19T17:28:29.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.371 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.371 nvme0n1 : 2.01 29450.60 115.04 0.00 0.00 4339.06 2075.31 14745.60 00:29:28.371 [2024-11-19T17:28:29.842Z] =================================================================================================================== 00:29:28.371 [2024-11-19T17:28:29.842Z] Total : 29450.60 115.04 0.00 0.00 4339.06 2075.31 14745.60 00:29:28.371 { 00:29:28.371 "results": [ 00:29:28.371 { 00:29:28.371 "job": "nvme0n1", 00:29:28.371 "core_mask": "0x2", 00:29:28.371 "workload": "randwrite", 00:29:28.371 "status": "finished", 00:29:28.371 "queue_depth": 128, 00:29:28.371 "io_size": 4096, 00:29:28.371 "runtime": 2.005562, 00:29:28.371 "iops": 29450.597887275486, 00:29:28.371 "mibps": 115.04139799716987, 00:29:28.371 "io_failed": 0, 00:29:28.371 "io_timeout": 0, 00:29:28.371 "avg_latency_us": 4339.060886368125, 00:29:28.371 "min_latency_us": 2075.306666666667, 00:29:28.371 "max_latency_us": 14745.6 00:29:28.371 } 00:29:28.371 ], 00:29:28.371 "core_count": 1 00:29:28.371 } 00:29:28.371 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:28.371 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:28.371 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:28.371 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:28.371 | .driver_specific 00:29:28.371 | .nvme_error 00:29:28.371 | .status_code 00:29:28.371 | .command_transient_transport_error' 00:29:28.371 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 231 > 0 )) 00:29:28.371 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2164260 00:29:28.371 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2164260 ']' 00:29:28.371 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2164260 00:29:28.371 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:28.371 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:28.371 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2164260 00:29:28.631 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:28.631 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:28.631 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2164260' 00:29:28.631 killing process with pid 2164260 00:29:28.631 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2164260 00:29:28.631 Received shutdown signal, test time was about 2.000000 seconds 00:29:28.631 00:29:28.631 Latency(us) 00:29:28.631 [2024-11-19T17:28:30.102Z] 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.631 [2024-11-19T17:28:30.102Z] =================================================================================================================== 00:29:28.631 [2024-11-19T17:28:30.102Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:28.631 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2164260 00:29:28.631 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:28.631 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:28.631 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:28.631 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:28.631 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:28.631 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2164950 00:29:28.631 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2164950 /var/tmp/bperf.sock 00:29:28.631 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2164950 ']' 00:29:28.631 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:28.632 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:28.632 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.632 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:28.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:28.632 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.632 18:28:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:28.632 [2024-11-19 18:28:30.033016] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:29:28.632 [2024-11-19 18:28:30.033073] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2164950 ] 00:29:28.632 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:28.632 Zero copy mechanism will not be used. 00:29:28.893 [2024-11-19 18:28:30.117221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.893 [2024-11-19 18:28:30.147482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.463 18:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:29.463 18:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:29.463 18:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:29.463 18:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:29.725 18:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 
00:29:29.725 18:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.725 18:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:29.725 18:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.725 18:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:29.725 18:28:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:29.986 nvme0n1 00:29:29.986 18:28:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:29.986 18:28:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.986 18:28:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:29.986 18:28:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.986 18:28:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:29.986 18:28:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:29.986 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:29.986 Zero copy mechanism will not be used. 00:29:29.986 Running I/O for 2 seconds... 
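The repeated `Data digest error` / `COMMAND TRANSIENT TRANSPORT ERROR (00/22)` pairs in this log come from the test corrupting the accel crc32c operation (`accel_error_inject_error -o crc32c -t corrupt -i 32`) while the controller is attached with `--ddgst`: each received data PDU's CRC32C digest then mismatches, and `tcp.c:data_crc32_calc_done` fails the command with a transient transport error. As a hedged illustration only (not SPDK's implementation, which uses tables or hardware CRC32 instructions), a minimal bitwise sketch of the CRC32C (Castagnoli) checksum that NVMe/TCP uses for its data digest:

```python
# Reference bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78.
# NVMe/TCP header and data digests use this checksum; this sketch is for
# illustration and is far slower than table- or SSE4.2-based versions.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC32C check value over the ASCII string "123456789".
assert crc32c(b"123456789") == 0xE3069283

# Corrupting even one payload byte changes the digest -- the mismatch the
# injected crc32c error provokes on the receive path above.
assert crc32c(b"123456788") != crc32c(b"123456789")
```

The injected corruption flips the computed digest rather than the payload, but the effect at the receiver is the same: computed and carried digests disagree, so the command completes with a transient transport error instead of success.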
00:29:29.986 [2024-11-19 18:28:31.359672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.986 [2024-11-19 18:28:31.359736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.986 [2024-11-19 18:28:31.359760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:29.986 [2024-11-19 18:28:31.365003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.986 [2024-11-19 18:28:31.365121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.986 [2024-11-19 18:28:31.365139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:29.986 [2024-11-19 18:28:31.370202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.986 [2024-11-19 18:28:31.370291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.986 [2024-11-19 18:28:31.370308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:29.986 [2024-11-19 18:28:31.375283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.986 [2024-11-19 18:28:31.375367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.986 [2024-11-19 18:28:31.375382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:29.986 [2024-11-19 18:28:31.380586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.986 [2024-11-19 18:28:31.380661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.986 [2024-11-19 18:28:31.380677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:29.986 [2024-11-19 18:28:31.385070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.986 [2024-11-19 18:28:31.385143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.986 [2024-11-19 18:28:31.385163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:29.986 [2024-11-19 18:28:31.389522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.986 [2024-11-19 18:28:31.389601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.986 [2024-11-19 18:28:31.389617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:29.986 [2024-11-19 18:28:31.393744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.986 [2024-11-19 18:28:31.393822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.986 [2024-11-19 18:28:31.393837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:29.986 [2024-11-19 18:28:31.398112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.986 [2024-11-19 18:28:31.398230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.986 [2024-11-19 18:28:31.398246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:29.986 [2024-11-19 18:28:31.402175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.986 [2024-11-19 18:28:31.402249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.986 [2024-11-19 18:28:31.402264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:29.986 [2024-11-19 18:28:31.406029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.986 [2024-11-19 18:28:31.406129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.986 [2024-11-19 18:28:31.406144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:29.986 [2024-11-19 18:28:31.409868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.986 [2024-11-19 18:28:31.409952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:29.986 [2024-11-19 18:28:31.409968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:29.986 [2024-11-19 18:28:31.413733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.986 [2024-11-19 18:28:31.413826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.986 [2024-11-19 18:28:31.413842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:29.986 [2024-11-19 18:28:31.417471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.987 [2024-11-19 18:28:31.417544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.987 [2024-11-19 18:28:31.417559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:29.987 [2024-11-19 18:28:31.421006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.987 [2024-11-19 18:28:31.421084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.987 [2024-11-19 18:28:31.421099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:29.987 [2024-11-19 18:28:31.424788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.987 [2024-11-19 18:28:31.424849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.987 [2024-11-19 18:28:31.424864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:29.987 [2024-11-19 18:28:31.428228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.987 [2024-11-19 18:28:31.428303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.987 [2024-11-19 18:28:31.428318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:29.987 [2024-11-19 18:28:31.431996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.987 [2024-11-19 18:28:31.432082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.987 [2024-11-19 18:28:31.432097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:29.987 [2024-11-19 18:28:31.435704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.987 [2024-11-19 18:28:31.435763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.987 [2024-11-19 18:28:31.435781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:29.987 [2024-11-19 18:28:31.439105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.987 [2024-11-19 18:28:31.439167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.987 [2024-11-19 18:28:31.439182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:29.987 [2024-11-19 18:28:31.442284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.987 [2024-11-19 18:28:31.442335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.987 [2024-11-19 18:28:31.442350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:29.987 [2024-11-19 18:28:31.445297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.987 [2024-11-19 18:28:31.445366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.987 [2024-11-19 18:28:31.445382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:29.987 [2024-11-19 18:28:31.448606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:29.987 [2024-11-19 18:28:31.448659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.987 [2024-11-19 18:28:31.448674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:29.987 [2024-11-19 18:28:31.451607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 
00:29:29.987 [2024-11-19 18:28:31.451692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.987 [2024-11-19 18:28:31.451707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.250 [2024-11-19 18:28:31.455043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.250 [2024-11-19 18:28:31.455121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.250 [2024-11-19 18:28:31.455136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.250 [2024-11-19 18:28:31.458301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.250 [2024-11-19 18:28:31.458352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.250 [2024-11-19 18:28:31.458368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.250 [2024-11-19 18:28:31.461557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.250 [2024-11-19 18:28:31.461626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.250 [2024-11-19 18:28:31.461641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.250 [2024-11-19 18:28:31.464749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.250 [2024-11-19 18:28:31.464828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.250 [2024-11-19 18:28:31.464843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.250 [2024-11-19 18:28:31.468265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.250 [2024-11-19 18:28:31.468310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.250 [2024-11-19 18:28:31.468325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.250 [2024-11-19 18:28:31.471805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.250 [2024-11-19 18:28:31.471851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.250 [2024-11-19 18:28:31.471866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.250 [2024-11-19 18:28:31.475466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.250 [2024-11-19 18:28:31.475636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.250 [2024-11-19 18:28:31.475651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.250 [2024-11-19 18:28:31.482048] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.250 [2024-11-19 18:28:31.482143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.250 [2024-11-19 18:28:31.482162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.250 [2024-11-19 18:28:31.487865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.250 [2024-11-19 18:28:31.487910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.250 [2024-11-19 18:28:31.487926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.250 [2024-11-19 18:28:31.491543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.250 [2024-11-19 18:28:31.491585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.250 [2024-11-19 18:28:31.491600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.250 [2024-11-19 18:28:31.496328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.250 [2024-11-19 18:28:31.496398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.250 [2024-11-19 18:28:31.496413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:29:30.251 [2024-11-19 18:28:31.499899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.499993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.500008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.503741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.503809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.503824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.507379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.507458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.507474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.510772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.510852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.510867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.514293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.514380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.514395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.518031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.518087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.518102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.521732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.521786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.521801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.525205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.525262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.525277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.531489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.531563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.531578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.534941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.535018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.535036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.538444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.538511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.538526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.541920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.542004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:30.251 [2024-11-19 18:28:31.542020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.545703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.545766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.545782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.548924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.548996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.549011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.552109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.552178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.552193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.555420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.555483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.555498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.558934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.559024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.559039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.562330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.562396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.562411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.565920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.566007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.566023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.569537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.569603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.569618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.572813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.572863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.572878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.575821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.575879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.575894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.579101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.579155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.579176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.582527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 
00:29:30.251 [2024-11-19 18:28:31.582589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.582604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.585542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.585606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.585621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.588706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.588777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.588792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.251 [2024-11-19 18:28:31.591783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.251 [2024-11-19 18:28:31.591825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.251 [2024-11-19 18:28:31.591840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.595007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.595059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.595074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.600236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.600461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.600476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.604451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.604642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.604657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.607524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.607718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.607733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.610713] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.610903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.610919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.615075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.615269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.615284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.618324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.618686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.618702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.621539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.621757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.621772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:30.252 [2024-11-19 18:28:31.624769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.624955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.624974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.627701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.627896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.627912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.630958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.631330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.631346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.634489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.634673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.634689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.637792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.638019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.638034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.641885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.642074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.642089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.648273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.648437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.648452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.651765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.651936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.651951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.655830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.656029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.656044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.660028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.660285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.660300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.663912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.664095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.664110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.667097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.667269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:30.252 [2024-11-19 18:28:31.667284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.670169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.670352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.670367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.673320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.673486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.673501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.676610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.676785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.676800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.680006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.680175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.680190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.683366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.683537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.683552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.686601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.686803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.252 [2024-11-19 18:28:31.686817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.252 [2024-11-19 18:28:31.689871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.252 [2024-11-19 18:28:31.690041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.253 [2024-11-19 18:28:31.690055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.253 [2024-11-19 18:28:31.693058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.253 [2024-11-19 18:28:31.693310] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.253 [2024-11-19 18:28:31.693325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.253 [2024-11-19 18:28:31.696419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.253 [2024-11-19 18:28:31.696591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.253 [2024-11-19 18:28:31.696606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.253 [2024-11-19 18:28:31.699727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.253 [2024-11-19 18:28:31.699898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.253 [2024-11-19 18:28:31.699913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.253 [2024-11-19 18:28:31.702952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.253 [2024-11-19 18:28:31.703121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.253 [2024-11-19 18:28:31.703136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.253 [2024-11-19 18:28:31.706048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.253 [2024-11-19 18:28:31.706222] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.253 [2024-11-19 18:28:31.706237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.253 [2024-11-19 18:28:31.709279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.253 [2024-11-19 18:28:31.709449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.253 [2024-11-19 18:28:31.709463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.253 [2024-11-19 18:28:31.712538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.253 [2024-11-19 18:28:31.712708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.253 [2024-11-19 18:28:31.712723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.253 [2024-11-19 18:28:31.715731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.253 [2024-11-19 18:28:31.715899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.253 [2024-11-19 18:28:31.715917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.515 [2024-11-19 18:28:31.718970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with 
pdu=0x2000166ff3c8 00:29:30.515 [2024-11-19 18:28:31.719139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.515 [2024-11-19 18:28:31.719154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.515 [2024-11-19 18:28:31.722126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.515 [2024-11-19 18:28:31.722309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.515 [2024-11-19 18:28:31.722324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.515 [2024-11-19 18:28:31.725906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.515 [2024-11-19 18:28:31.726121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.515 [2024-11-19 18:28:31.726136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.515 [2024-11-19 18:28:31.730751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.515 [2024-11-19 18:28:31.730968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.515 [2024-11-19 18:28:31.730983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.515 [2024-11-19 18:28:31.735749] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.515 [2024-11-19 18:28:31.735952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.515 [2024-11-19 18:28:31.735967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.515 [2024-11-19 18:28:31.741937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.515 [2024-11-19 18:28:31.742253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.515 [2024-11-19 18:28:31.742269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.515 [2024-11-19 18:28:31.748930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.515 [2024-11-19 18:28:31.749119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.515 [2024-11-19 18:28:31.749134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.515 [2024-11-19 18:28:31.752719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.515 [2024-11-19 18:28:31.752881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.515 [2024-11-19 18:28:31.752897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.515 [2024-11-19 
18:28:31.756087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.515 [2024-11-19 18:28:31.756296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.515 [2024-11-19 18:28:31.756312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.515 [2024-11-19 18:28:31.759824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.515 [2024-11-19 18:28:31.759992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.515 [2024-11-19 18:28:31.760007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.515 [2024-11-19 18:28:31.763107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.515 [2024-11-19 18:28:31.763282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.515 [2024-11-19 18:28:31.763297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.515 [2024-11-19 18:28:31.766396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.515 [2024-11-19 18:28:31.766563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.515 [2024-11-19 18:28:31.766578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:29:30.515 [2024-11-19 18:28:31.769512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.515 [2024-11-19 18:28:31.769862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.515 [2024-11-19 18:28:31.769877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.515 [2024-11-19 18:28:31.772689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.515 [2024-11-19 18:28:31.772856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.515 [2024-11-19 18:28:31.772871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.515 [2024-11-19 18:28:31.775705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.515 [2024-11-19 18:28:31.775927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.515 [2024-11-19 18:28:31.775943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.515 [2024-11-19 18:28:31.779063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.515 [2024-11-19 18:28:31.779320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.515 [2024-11-19 18:28:31.779335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.515 [2024-11-19 18:28:31.782226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.782482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.782497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.785435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.785652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.785667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.788744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.788964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.788979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.793804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.794023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.794038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.797967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.798175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.798191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.801165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.801352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.801366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.804353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.804533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.804547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.807602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.807786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:30.516 [2024-11-19 18:28:31.807801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.810775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.810954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.810970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.813843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.814078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.814096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.816901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.817061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.817076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.820339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.820515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7264 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.820530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.822871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.822988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.823003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.825320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.825462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.825477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.827730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.827862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.827877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.830156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.830305] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.830320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.832632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.832780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.832795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.835139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.835265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.835280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.837532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.837701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.837716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.840346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.840513] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.840528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.844466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.844660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.844675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.849546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.849743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.849758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.854615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.854815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.854830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.864099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with 
pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.864242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.864257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.872232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.872423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.872438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.879010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.879211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.879226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.886724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.516 [2024-11-19 18:28:31.886979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.516 [2024-11-19 18:28:31.886995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.516 [2024-11-19 18:28:31.897021] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.517 [2024-11-19 18:28:31.897294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.517 [2024-11-19 18:28:31.897309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.517 [2024-11-19 18:28:31.907208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.517 [2024-11-19 18:28:31.907519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.517 [2024-11-19 18:28:31.907535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.517 [2024-11-19 18:28:31.916484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.517 [2024-11-19 18:28:31.916664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.517 [2024-11-19 18:28:31.916679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.517 [2024-11-19 18:28:31.925263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.517 [2024-11-19 18:28:31.925457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.517 [2024-11-19 18:28:31.925472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.517 [2024-11-19 
18:28:31.933947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.517 [2024-11-19 18:28:31.934061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.517 [2024-11-19 18:28:31.934076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.517 [2024-11-19 18:28:31.942244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.517 [2024-11-19 18:28:31.942453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.517 [2024-11-19 18:28:31.942469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.517 [2024-11-19 18:28:31.946514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.517 [2024-11-19 18:28:31.946695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.517 [2024-11-19 18:28:31.946710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.517 [2024-11-19 18:28:31.950342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:30.517 [2024-11-19 18:28:31.950517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.517 [2024-11-19 18:28:31.950532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0
00:29:30.517 [2024-11-19 18:28:31.954043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.517 [2024-11-19 18:28:31.954224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.517 [2024-11-19 18:28:31.954242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.517 [2024-11-19 18:28:31.957804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.517 [2024-11-19 18:28:31.957961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.517 [2024-11-19 18:28:31.957976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.517 [2024-11-19 18:28:31.961527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.517 [2024-11-19 18:28:31.961708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.517 [2024-11-19 18:28:31.961723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:30.517 [2024-11-19 18:28:31.964862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.517 [2024-11-19 18:28:31.965007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.517 [2024-11-19 18:28:31.965021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:30.517 [2024-11-19 18:28:31.969674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.517 [2024-11-19 18:28:31.969738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.517 [2024-11-19 18:28:31.969754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.517 [2024-11-19 18:28:31.973615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.517 [2024-11-19 18:28:31.973794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.517 [2024-11-19 18:28:31.973809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.517 [2024-11-19 18:28:31.976806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.517 [2024-11-19 18:28:31.976959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.517 [2024-11-19 18:28:31.976974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:30.517 [2024-11-19 18:28:31.979847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.517 [2024-11-19 18:28:31.979979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.517 [2024-11-19 18:28:31.979993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:30.779 [2024-11-19 18:28:31.982877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.779 [2024-11-19 18:28:31.983011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.779 [2024-11-19 18:28:31.983025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.779 [2024-11-19 18:28:31.985844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.779 [2024-11-19 18:28:31.985989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.779 [2024-11-19 18:28:31.986004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.779 [2024-11-19 18:28:31.988755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.779 [2024-11-19 18:28:31.988900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.779 [2024-11-19 18:28:31.988914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:30.779 [2024-11-19 18:28:31.992001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.779 [2024-11-19 18:28:31.992152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.779 [2024-11-19 18:28:31.992173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:30.779 [2024-11-19 18:28:31.995265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.779 [2024-11-19 18:28:31.995401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.779 [2024-11-19 18:28:31.995416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.779 [2024-11-19 18:28:31.997903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.779 [2024-11-19 18:28:31.998039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.779 [2024-11-19 18:28:31.998055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.779 [2024-11-19 18:28:32.000554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.779 [2024-11-19 18:28:32.000688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.779 [2024-11-19 18:28:32.000703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:30.779 [2024-11-19 18:28:32.003105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.779 [2024-11-19 18:28:32.003245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.779 [2024-11-19 18:28:32.003261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:30.779 [2024-11-19 18:28:32.005960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.779 [2024-11-19 18:28:32.006098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.779 [2024-11-19 18:28:32.006113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.779 [2024-11-19 18:28:32.009120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.779 [2024-11-19 18:28:32.009255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.779 [2024-11-19 18:28:32.009271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.012572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.012708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.012724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.015326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.015459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.015475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.018890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.019081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.019097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.023528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.023740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.023756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.028518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.028667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.028683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.032792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.032968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.032984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.039746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.039943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.039958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.043604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.043798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.043813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.046711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.046841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.046859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.050529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.050708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.050723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.054048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.054209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.054224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.057124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.057266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.057281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.060285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.060460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.060475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.063694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.063824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.063840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.067550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.067680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.067695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.074903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.075075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.075090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.080103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.080166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.080181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.084323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.084409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.084424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.089908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.090151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.090172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.096462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.096690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.096706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.100657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.100735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.100749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.104985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.105026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.105041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.110178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.110318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.110333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.113862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.113931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.113945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.117264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.117344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.117358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.120190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.120260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.120275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:30.780 [2024-11-19 18:28:32.123034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.780 [2024-11-19 18:28:32.123130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.780 [2024-11-19 18:28:32.123146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.125998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.126096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.126111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.128868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.128946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.128960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.131896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.131989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.132004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.134672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.134750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.134765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.138297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.138364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.138379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.143356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.143436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.143451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.148319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.148398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.148413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.153287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.153346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.153363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.158344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.158437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.158452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.163313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.163419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.163435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.168429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.168619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.168633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.173441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.173541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.173555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.178442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.178501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.178516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.183494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.183580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.183595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.188466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.188655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.188669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.193533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.193611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.193626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.198515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.198693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.198709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.203592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.203777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.203792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.208547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.208732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.208746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.213594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.213684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.213699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.218568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.218640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.218656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.225381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.225690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.225706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.230584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.230652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.230667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.235428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.235545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.235560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.241693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.241796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.241811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:30.781 [2024-11-19 18:28:32.245409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:30.781 [2024-11-19 18:28:32.245503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.781 [2024-11-19 18:28:32.245517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.044 [2024-11-19 18:28:32.248994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.044 [2024-11-19 18:28:32.249048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.044 [2024-11-19 18:28:32.249063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.044 [2024-11-19 18:28:32.252288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.044 [2024-11-19 18:28:32.252360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.044 [2024-11-19 18:28:32.252375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.044 [2024-11-19 18:28:32.255244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.044 [2024-11-19 18:28:32.255327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.044 [2024-11-19 18:28:32.255342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.044 [2024-11-19 18:28:32.258076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.044 [2024-11-19 18:28:32.258138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.044 [2024-11-19 18:28:32.258152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.044 [2024-11-19 18:28:32.260972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.044 [2024-11-19 18:28:32.261065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.044 [2024-11-19 18:28:32.261080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.044 [2024-11-19 18:28:32.263940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.044 [2024-11-19 18:28:32.264049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.044 [2024-11-19 18:28:32.264064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.044 [2024-11-19 18:28:32.266808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.044 [2024-11-19 18:28:32.266899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.044 [2024-11-19 18:28:32.266914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.044 [2024-11-19 18:28:32.269453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.044 [2024-11-19 18:28:32.269512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.044 [2024-11-19 18:28:32.269530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.044 [2024-11-19 18:28:32.271935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.044 [2024-11-19 18:28:32.271996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.044 [2024-11-19 18:28:32.272011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.044 [2024-11-19 18:28:32.274481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.044 [2024-11-19 18:28:32.274532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.044 [2024-11-19 18:28:32.274547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.044 [2024-11-19 18:28:32.277201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.044 [2024-11-19 18:28:32.277243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.044 [2024-11-19 18:28:32.277258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.044 [2024-11-19 18:28:32.279741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.044 [2024-11-19 18:28:32.279805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.044 [2024-11-19 18:28:32.279820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.044 [2024-11-19 18:28:32.282263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.044 [2024-11-19 18:28:32.282309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.044 [2024-11-19 18:28:32.282324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.044 [2024-11-19 18:28:32.284790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.044 [2024-11-19 18:28:32.284846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.044 [2024-11-19 18:28:32.284861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.044 [2024-11-19 18:28:32.287206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.044 [2024-11-19 18:28:32.287266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.044 [2024-11-19 18:28:32.287281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.044 [2024-11-19 18:28:32.289700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.044 [2024-11-19 18:28:32.289777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.044 [2024-11-19 18:28:32.289792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.044 [2024-11-19 18:28:32.292686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.044 [2024-11-19 18:28:32.292744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.044 [2024-11-19 18:28:32.292758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.044 [2024-11-19 18:28:32.295783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.044 [2024-11-19 18:28:32.295836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.044 [2024-11-19 18:28:32.295851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.044 [2024-11-19 18:28:32.298747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.044 [2024-11-19 18:28:32.298812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.044 [2024-11-19 18:28:32.298827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.044 [2024-11-19 18:28:32.301297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.044 [2024-11-19 18:28:32.301357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.044 [2024-11-19 18:28:32.301372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.044 [2024-11-19 18:28:32.303850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.044 [2024-11-19 18:28:32.303931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.044 [2024-11-19 18:28:32.303946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.044 [2024-11-19 18:28:32.306430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.044 [2024-11-19 18:28:32.306497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.044 [2024-11-19 18:28:32.306512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.044 [2024-11-19 18:28:32.308984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 
00:29:31.044 [2024-11-19 18:28:32.309035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.044 [2024-11-19 18:28:32.309050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.311512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.311567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.311582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.314179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.314235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.314250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.316796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.316849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.316864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.319500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.319551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.319566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.322118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.322176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.322191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.324736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.324798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.324813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.327446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.327503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.327518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.330086] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.330136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.330151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.332743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.332793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.332807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.335256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.335302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.335317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.337821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.337877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.337895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:29:31.045 [2024-11-19 18:28:32.340875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.340946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.340960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.344739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.344808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.344823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.349814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.350013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.350028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.045 7912.00 IOPS, 989.00 MiB/s [2024-11-19T17:28:32.516Z] [2024-11-19 18:28:32.356724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.357018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.357034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.367201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.367542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.367557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.377053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.377215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.377230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.385900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.386197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.386213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.394706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.394918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:31.045 [2024-11-19 18:28:32.394933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.403737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.404047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.404063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.413271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.413541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.413556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.422353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.422678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.422694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.430790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.431120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.431135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.440067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.440180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.440196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.448483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.448587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.448602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.453278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.453327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.453342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.045 [2024-11-19 18:28:32.457164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.045 [2024-11-19 18:28:32.457267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.045 [2024-11-19 18:28:32.457282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.046 [2024-11-19 18:28:32.461028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.046 [2024-11-19 18:28:32.461153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-11-19 18:28:32.461174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.046 [2024-11-19 18:28:32.465250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.046 [2024-11-19 18:28:32.465321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-11-19 18:28:32.465336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.046 [2024-11-19 18:28:32.469174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.046 [2024-11-19 18:28:32.469267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-11-19 18:28:32.469282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.046 [2024-11-19 18:28:32.473251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 
00:29:31.046 [2024-11-19 18:28:32.473359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-11-19 18:28:32.473373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.046 [2024-11-19 18:28:32.477094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.046 [2024-11-19 18:28:32.477172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-11-19 18:28:32.477188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.046 [2024-11-19 18:28:32.480772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.046 [2024-11-19 18:28:32.480857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-11-19 18:28:32.480872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.046 [2024-11-19 18:28:32.484423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.046 [2024-11-19 18:28:32.484485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-11-19 18:28:32.484500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.046 [2024-11-19 18:28:32.487877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.046 [2024-11-19 18:28:32.487921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-11-19 18:28:32.487936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.046 [2024-11-19 18:28:32.492050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.046 [2024-11-19 18:28:32.492148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-11-19 18:28:32.492168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.046 [2024-11-19 18:28:32.497666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.046 [2024-11-19 18:28:32.497905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-11-19 18:28:32.497922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.046 [2024-11-19 18:28:32.502951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.046 [2024-11-19 18:28:32.502996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.046 [2024-11-19 18:28:32.503011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.046 [2024-11-19 18:28:32.508221] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.046 [2024-11-19 18:28:32.508477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.046 [2024-11-19 18:28:32.508492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.309 [2024-11-19 18:28:32.512688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.309 [2024-11-19 18:28:32.512732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.309 [2024-11-19 18:28:32.512748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.309 [2024-11-19 18:28:32.516710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.309 [2024-11-19 18:28:32.516787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.309 [2024-11-19 18:28:32.516801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.309 [2024-11-19 18:28:32.519743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.309 [2024-11-19 18:28:32.519804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.309 [2024-11-19 18:28:32.519819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.309 [2024-11-19 18:28:32.522598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.309 [2024-11-19 18:28:32.522649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.309 [2024-11-19 18:28:32.522664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.309 [2024-11-19 18:28:32.525376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.309 [2024-11-19 18:28:32.525435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.309 [2024-11-19 18:28:32.525451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.309 [2024-11-19 18:28:32.528252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.309 [2024-11-19 18:28:32.528333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.309 [2024-11-19 18:28:32.528348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.309 [2024-11-19 18:28:32.531055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.309 [2024-11-19 18:28:32.531128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.309 [2024-11-19 18:28:32.531144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.309 [2024-11-19 18:28:32.533901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.309 [2024-11-19 18:28:32.533961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.309 [2024-11-19 18:28:32.533976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.309 [2024-11-19 18:28:32.536464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.309 [2024-11-19 18:28:32.536512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.536526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.538956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.539029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.539044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.541699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.541752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.541766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.544987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.545053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.545068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.548210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.548272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.548287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.550766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.550811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.550827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.553420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.553466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.553481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.556031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.556085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.556100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.558689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.558741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.558756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.561267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.561310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.561325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.563662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.563720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.563735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.566267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.566311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.566326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.568802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.568848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.568864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.571174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.571220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.571235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.573584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.573636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.573652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.575951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.576011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.576029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.578337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.578381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.578397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.580703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.580751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.580766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.583259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.583303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.583318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.586885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.587151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.587171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.592395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.592455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.592469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.594909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.594949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.594964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.597586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.597630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.597644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.600132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.600211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.600226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.602740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.602796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.602811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.605329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.605392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.605407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.607903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.607951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.310 [2024-11-19 18:28:32.607966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.310 [2024-11-19 18:28:32.610378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.310 [2024-11-19 18:28:32.610423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.610438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.612950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.613009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.613024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.615495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.615539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.615553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.618025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.618080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.618095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.620612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.620656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.620671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.623094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.623138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.623153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.625709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.625749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.625764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.628516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.628567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.628582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.631167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.631216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.631231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.634577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.634668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.634683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.639470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.639582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.639598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.646292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.646521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.646536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.656440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.656742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.656758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.665898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.666184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.666199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.675027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.675258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.675276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.682700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.682877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.682892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.688240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.688405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.688420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.693290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.693391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.693406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.698287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.698454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.698469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.703339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.703553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.703568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.708324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.708497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.708513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.713330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.713437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.713451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.719281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.719449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.719463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.724299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.724419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.724433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.729294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.729420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.729436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.733950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.734035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.734050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.742628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.742978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.311 [2024-11-19 18:28:32.742994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.311 [2024-11-19 18:28:32.748256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.311 [2024-11-19 18:28:32.748362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.312 [2024-11-19 18:28:32.748377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.312 [2024-11-19 18:28:32.752661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.312 [2024-11-19 18:28:32.752759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.312 [2024-11-19 18:28:32.752774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.312 [2024-11-19 18:28:32.757529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.312 [2024-11-19 18:28:32.757676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.312 [2024-11-19 18:28:32.757691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.312 [2024-11-19 18:28:32.762151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.312 [2024-11-19 18:28:32.762284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.312 [2024-11-19 18:28:32.762299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.312 [2024-11-19 18:28:32.766611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.312 [2024-11-19 18:28:32.766781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.312 [2024-11-19 18:28:32.766796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.312 [2024-11-19 18:28:32.770383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.312 [2024-11-19 18:28:32.770510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.312 [2024-11-19 18:28:32.770525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.312 [2024-11-19 18:28:32.774426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.312 [2024-11-19 18:28:32.774606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.312 [2024-11-19 18:28:32.774620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.574 [2024-11-19 18:28:32.779577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.574 [2024-11-19 18:28:32.779662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.574 [2024-11-19 18:28:32.779677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.574 [2024-11-19 18:28:32.785715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.574 [2024-11-19 18:28:32.785802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.574 [2024-11-19 18:28:32.785817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.574 [2024-11-19 18:28:32.789154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.575 [2024-11-19 18:28:32.789248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.575 [2024-11-19 18:28:32.789263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.575 [2024-11-19 18:28:32.792609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.575 [2024-11-19 18:28:32.792694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.575 [2024-11-19 18:28:32.792708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.575 [2024-11-19 18:28:32.796674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.575 [2024-11-19 18:28:32.796752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.575 [2024-11-19 18:28:32.796767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.575 [2024-11-19 18:28:32.799549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.575 [2024-11-19 18:28:32.799650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.575 [2024-11-19 18:28:32.799664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.575 [2024-11-19 18:28:32.802421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.575 [2024-11-19 18:28:32.802482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.575 [2024-11-19 18:28:32.802499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.575 [2024-11-19 18:28:32.805260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.575 [2024-11-19 18:28:32.805320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.575 [2024-11-19 18:28:32.805335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.808473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.808563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.808578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.812949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.813142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.813157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.817813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.817913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.817928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.822302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.822378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.822393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.829017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.829295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.829310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.836717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.837005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.837022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.843883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.844213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.844229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.850652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.850776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:31.575 [2024-11-19 18:28:32.850792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.856393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.856488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.856503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.861053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.861135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.861150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.867453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.867726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.867742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.874915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.875219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.875235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.881282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.881352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.881367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.886747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.887046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.887063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.891002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.891066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.891081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.894960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.895004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.895019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.899640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.899705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.899721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.904826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.904875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.904890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.909526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.909830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.909846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.914320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 
00:29:31.575 [2024-11-19 18:28:32.914435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.914450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.575 [2024-11-19 18:28:32.917952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.575 [2024-11-19 18:28:32.918008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.575 [2024-11-19 18:28:32.918023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.921013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.921090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.921105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.924135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.924225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.924240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.928168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.928343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.928358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.933240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.933305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.933326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.936819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.936892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.936907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.939899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.940024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.940039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.942933] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.942988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.943002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.946561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.946692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.946707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.949643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.949715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.949730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.952169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.952250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.952265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:29:31.576 [2024-11-19 18:28:32.954622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.954677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.954692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.957030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.957074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.957089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.959673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.959732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.959747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.962477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.962549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.962563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.965327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.965422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.965437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.968147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.968238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.968253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.970902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.970989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.971004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.973611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.973674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.973689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.976452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.976545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.976560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.979893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.979984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.979999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.982629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.982737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.982752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.985415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.985481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:31.576 [2024-11-19 18:28:32.985496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.988185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.988233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.988247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.990929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.990982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.990997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.993666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.993730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.576 [2024-11-19 18:28:32.993745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.576 [2024-11-19 18:28:32.996438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.576 [2024-11-19 18:28:32.996560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.577 [2024-11-19 18:28:32.996576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.577 [2024-11-19 18:28:32.999149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.577 [2024-11-19 18:28:32.999249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.577 [2024-11-19 18:28:32.999264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.577 [2024-11-19 18:28:33.001851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.577 [2024-11-19 18:28:33.001934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.577 [2024-11-19 18:28:33.001949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.577 [2024-11-19 18:28:33.004726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.577 [2024-11-19 18:28:33.004826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.577 [2024-11-19 18:28:33.004841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.577 [2024-11-19 18:28:33.007903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.577 [2024-11-19 18:28:33.007977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.577 [2024-11-19 18:28:33.007995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.577 [2024-11-19 18:28:33.015057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.577 [2024-11-19 18:28:33.015311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.577 [2024-11-19 18:28:33.015327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.577 [2024-11-19 18:28:33.023863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.577 [2024-11-19 18:28:33.024064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.577 [2024-11-19 18:28:33.024079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.577 [2024-11-19 18:28:33.033438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:31.577 [2024-11-19 18:28:33.033666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.577 [2024-11-19 18:28:33.033681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.577 [2024-11-19 18:28:33.039769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 
00:29:31.577 [2024-11-19 18:28:33.040014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.577 [2024-11-19 18:28:33.040029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.839 [2024-11-19 18:28:33.044658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.839 [2024-11-19 18:28:33.044862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.839 [2024-11-19 18:28:33.044878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.839 [2024-11-19 18:28:33.048431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.839 [2024-11-19 18:28:33.048518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.839 [2024-11-19 18:28:33.048533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.839 [2024-11-19 18:28:33.050924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.839 [2024-11-19 18:28:33.051018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.839 [2024-11-19 18:28:33.051033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.839 [2024-11-19 18:28:33.053365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.839 [2024-11-19 18:28:33.053455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.839 [2024-11-19 18:28:33.053470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.839 [2024-11-19 18:28:33.055872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.839 [2024-11-19 18:28:33.055968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.839 [2024-11-19 18:28:33.055983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.839 [2024-11-19 18:28:33.058266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.058361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.058376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.060763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.060854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.060869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.063463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.063556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.063571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.066860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.066941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.066956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.069919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.070001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.070016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.072529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.072617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.072632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.075042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.075125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.075140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.077538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.077630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.077645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.079929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.080021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.080036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.082312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.082402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.082417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.084719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.084813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.084828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.087102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.087197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.087212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.089718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.089812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.089827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.092458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.092551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.092565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.095281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.095372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.095387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.098716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.098817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.098832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.104926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.105179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.105197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.110535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.110606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.110621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.113651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.113728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.113743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.116717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.116803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.116818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.121640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.121746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.121761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.126120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.126231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.126247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.130660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.130812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.130827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.135543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.135585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.135600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.138408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.138463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.138478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.140979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.141056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.141070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.143591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.840 [2024-11-19 18:28:33.143679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.840 [2024-11-19 18:28:33.143694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.840 [2024-11-19 18:28:33.146135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.146212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.146227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.148682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.148749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.148763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.151223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.151286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.151301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.154006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.154048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.154063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.157574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.157811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.157827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.161358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.161435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.161450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.164022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.164062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.164077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.166766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.166814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.166829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.169332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.169385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.169400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.172054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.172103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.172117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.174713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.174773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.174788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.177773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.177828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.177842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.180652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.180705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.180720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.183018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.183077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.183091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.185414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.185478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.185493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.188068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.188142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.188165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.191060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.191176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.191192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.194911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.195016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.195031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.202140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.202351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.202366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.207027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.207125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.207140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.214365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.214699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.214714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.218699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.218813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.218827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.222121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.222241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.222256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.227973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.228215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.228230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.233076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.233170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.233185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.237695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.237972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.237987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.242173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.242398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.841 [2024-11-19 18:28:33.242413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.841 [2024-11-19 18:28:33.245765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.841 [2024-11-19 18:28:33.245860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.842 [2024-11-19 18:28:33.245874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.842 [2024-11-19 18:28:33.248795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.842 [2024-11-19 18:28:33.248875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.842 [2024-11-19 18:28:33.248890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.842 [2024-11-19 18:28:33.251526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.842 [2024-11-19 18:28:33.251615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.842 [2024-11-19 18:28:33.251630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.842 [2024-11-19 18:28:33.254193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.842 [2024-11-19 18:28:33.254286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.842 [2024-11-19 18:28:33.254301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.842 [2024-11-19 18:28:33.256937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.842 [2024-11-19 18:28:33.257027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.842 [2024-11-19 18:28:33.257042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.842 [2024-11-19 18:28:33.259587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.842 [2024-11-19 18:28:33.259702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.842 [2024-11-19 18:28:33.259717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.842 [2024-11-19 18:28:33.262605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.842 [2024-11-19 18:28:33.262708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.842 [2024-11-19 18:28:33.262723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.842 [2024-11-19 18:28:33.265244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.842 [2024-11-19 18:28:33.265313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.842 [2024-11-19 18:28:33.265327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.842 [2024-11-19 18:28:33.269861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.842 [2024-11-19 18:28:33.270047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.842 [2024-11-19 18:28:33.270061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.842 [2024-11-19 18:28:33.276128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.842 [2024-11-19 18:28:33.276344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.842 [2024-11-19 18:28:33.276359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.842 [2024-11-19 18:28:33.282128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.842 [2024-11-19 18:28:33.282305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.842 [2024-11-19 18:28:33.282321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.842 [2024-11-19 18:28:33.287556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.842 [2024-11-19 18:28:33.287638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.842 [2024-11-19 18:28:33.287653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:31.842 [2024-11-19 18:28:33.291403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.842 [2024-11-19 18:28:33.291486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.842 [2024-11-19 18:28:33.291501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.842 [2024-11-19 18:28:33.296096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.842 [2024-11-19 18:28:33.296182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.842 [2024-11-19 18:28:33.296197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.842 [2024-11-19 18:28:33.300932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.842 [2024-11-19 18:28:33.301170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.842 [2024-11-19 18:28:33.301187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:31.842 [2024-11-19 18:28:33.304655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:31.842 [2024-11-19 18:28:33.304738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.842 [2024-11-19 18:28:33.304753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:32.103 [2024-11-19 18:28:33.309442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:32.103 [2024-11-19 18:28:33.309548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.103 [2024-11-19 18:28:33.309563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:32.103 [2024-11-19 18:28:33.316684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:32.103 [2024-11-19 18:28:33.316770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.103 [2024-11-19 18:28:33.316785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:32.103 [2024-11-19 18:28:33.321303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:32.103 [2024-11-19 18:28:33.321399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.103 [2024-11-19 18:28:33.321414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:32.103 [2024-11-19 18:28:33.325434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:32.103 [2024-11-19 18:28:33.325543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.103 [2024-11-19 18:28:33.325560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:32.103 [2024-11-19 18:28:33.328576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:32.103 [2024-11-19 18:28:33.328666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.103 [2024-11-19 18:28:33.328681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:32.103 [2024-11-19 18:28:33.332592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8
00:29:32.103 [2024-11-19 18:28:33.332707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.103 [2024-11-19 18:28:33.332722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:32.103 [2024-11-19 18:28:33.337601] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:32.103 [2024-11-19 18:28:33.337781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.103 [2024-11-19 18:28:33.337796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.103 [2024-11-19 18:28:33.341487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:32.103 [2024-11-19 18:28:33.341603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.103 [2024-11-19 18:28:33.341618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.103 [2024-11-19 18:28:33.345408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:32.103 [2024-11-19 18:28:33.345502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.103 [2024-11-19 18:28:33.345517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.103 [2024-11-19 18:28:33.350470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:32.103 [2024-11-19 18:28:33.350607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.103 [2024-11-19 18:28:33.350623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.103 7712.50 IOPS, 
964.06 MiB/s [2024-11-19T17:28:33.574Z] [2024-11-19 18:28:33.356126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21a1860) with pdu=0x2000166ff3c8 00:29:32.103 [2024-11-19 18:28:33.356226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.103 [2024-11-19 18:28:33.356242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.103 00:29:32.103 Latency(us) 00:29:32.103 [2024-11-19T17:28:33.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.103 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:32.103 nvme0n1 : 2.01 7701.76 962.72 0.00 0.00 2072.38 1140.05 13653.33 00:29:32.103 [2024-11-19T17:28:33.574Z] =================================================================================================================== 00:29:32.103 [2024-11-19T17:28:33.574Z] Total : 7701.76 962.72 0.00 0.00 2072.38 1140.05 13653.33 00:29:32.103 { 00:29:32.103 "results": [ 00:29:32.103 { 00:29:32.103 "job": "nvme0n1", 00:29:32.103 "core_mask": "0x2", 00:29:32.103 "workload": "randwrite", 00:29:32.103 "status": "finished", 00:29:32.103 "queue_depth": 16, 00:29:32.103 "io_size": 131072, 00:29:32.103 "runtime": 2.005386, 00:29:32.103 "iops": 7701.759162575185, 00:29:32.103 "mibps": 962.7198953218981, 00:29:32.103 "io_failed": 0, 00:29:32.103 "io_timeout": 0, 00:29:32.103 "avg_latency_us": 2072.382404661703, 00:29:32.103 "min_latency_us": 1140.0533333333333, 00:29:32.103 "max_latency_us": 13653.333333333334 00:29:32.103 } 00:29:32.103 ], 00:29:32.103 "core_count": 1 00:29:32.103 } 00:29:32.103 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:32.103 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # 
bperf_rpc bdev_get_iostat -b nvme0n1 00:29:32.103 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:32.103 | .driver_specific 00:29:32.103 | .nvme_error 00:29:32.103 | .status_code 00:29:32.103 | .command_transient_transport_error' 00:29:32.103 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:32.103 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 499 > 0 )) 00:29:32.103 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2164950 00:29:32.103 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2164950 ']' 00:29:32.103 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2164950 00:29:32.103 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:32.104 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.364 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2164950 00:29:32.364 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:32.364 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:32.364 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2164950' 00:29:32.364 killing process with pid 2164950 00:29:32.364 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2164950 00:29:32.364 Received shutdown signal, test 
time was about 2.000000 seconds 00:29:32.364 00:29:32.364 Latency(us) 00:29:32.364 [2024-11-19T17:28:33.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.364 [2024-11-19T17:28:33.835Z] =================================================================================================================== 00:29:32.364 [2024-11-19T17:28:33.835Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:32.364 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2164950 00:29:32.364 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2162548 00:29:32.364 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2162548 ']' 00:29:32.364 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2162548 00:29:32.364 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:32.364 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.364 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2162548 00:29:32.364 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:32.364 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:32.364 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2162548' 00:29:32.365 killing process with pid 2162548 00:29:32.365 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2162548 00:29:32.365 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 
2162548 00:29:32.625 00:29:32.625 real 0m16.511s 00:29:32.625 user 0m32.620s 00:29:32.625 sys 0m3.614s 00:29:32.625 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:32.625 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:32.625 ************************************ 00:29:32.625 END TEST nvmf_digest_error 00:29:32.625 ************************************ 00:29:32.625 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:32.625 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:32.625 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:32.625 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:32.625 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:32.625 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:32.625 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:32.625 18:28:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:32.625 rmmod nvme_tcp 00:29:32.625 rmmod nvme_fabrics 00:29:32.625 rmmod nvme_keyring 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2162548 ']' 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2162548 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2162548 ']' 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@958 -- # kill -0 2162548 00:29:32.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2162548) - No such process 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2162548 is not found' 00:29:32.625 Process with pid 2162548 is not found 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.625 18:28:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:35.170 00:29:35.170 real 0m42.693s 00:29:35.170 user 1m7.181s 00:29:35.170 sys 0m13.154s 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@10 -- # set +x 00:29:35.170 ************************************ 00:29:35.170 END TEST nvmf_digest 00:29:35.170 ************************************ 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.170 ************************************ 00:29:35.170 START TEST nvmf_bdevperf 00:29:35.170 ************************************ 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:35.170 * Looking for test storage... 
00:29:35.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:35.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.170 --rc genhtml_branch_coverage=1 00:29:35.170 --rc genhtml_function_coverage=1 00:29:35.170 --rc genhtml_legend=1 00:29:35.170 --rc geninfo_all_blocks=1 00:29:35.170 --rc geninfo_unexecuted_blocks=1 00:29:35.170 00:29:35.170 ' 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:29:35.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.170 --rc genhtml_branch_coverage=1 00:29:35.170 --rc genhtml_function_coverage=1 00:29:35.170 --rc genhtml_legend=1 00:29:35.170 --rc geninfo_all_blocks=1 00:29:35.170 --rc geninfo_unexecuted_blocks=1 00:29:35.170 00:29:35.170 ' 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:35.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.170 --rc genhtml_branch_coverage=1 00:29:35.170 --rc genhtml_function_coverage=1 00:29:35.170 --rc genhtml_legend=1 00:29:35.170 --rc geninfo_all_blocks=1 00:29:35.170 --rc geninfo_unexecuted_blocks=1 00:29:35.170 00:29:35.170 ' 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:35.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.170 --rc genhtml_branch_coverage=1 00:29:35.170 --rc genhtml_function_coverage=1 00:29:35.170 --rc genhtml_legend=1 00:29:35.170 --rc geninfo_all_blocks=1 00:29:35.170 --rc geninfo_unexecuted_blocks=1 00:29:35.170 00:29:35.170 ' 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:35.170 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:35.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:35.171 18:28:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:43.316 18:28:43 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:43.316 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.316 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.316 
18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:43.316 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:43.317 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:43.317 18:28:43 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:43.317 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:43.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:43.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:29:43.317 00:29:43.317 --- 10.0.0.2 ping statistics --- 00:29:43.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.317 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:43.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:43.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:29:43.317 00:29:43.317 --- 10.0.0.1 ping statistics --- 00:29:43.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.317 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2169964 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2169964 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2169964 ']' 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:43.317 18:28:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.317 [2024-11-19 18:28:43.981477] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:29:43.317 [2024-11-19 18:28:43.981543] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.317 [2024-11-19 18:28:44.079996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:43.317 [2024-11-19 18:28:44.133372] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.317 [2024-11-19 18:28:44.133422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:43.317 [2024-11-19 18:28:44.133431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.317 [2024-11-19 18:28:44.133443] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.317 [2024-11-19 18:28:44.133449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:43.317 [2024-11-19 18:28:44.135257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:43.317 [2024-11-19 18:28:44.135555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:43.317 [2024-11-19 18:28:44.135557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.579 [2024-11-19 18:28:44.858666] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.579 Malloc0 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:43.579 [2024-11-19 18:28:44.937563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:43.579 
18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:43.579 { 00:29:43.579 "params": { 00:29:43.579 "name": "Nvme$subsystem", 00:29:43.579 "trtype": "$TEST_TRANSPORT", 00:29:43.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:43.579 "adrfam": "ipv4", 00:29:43.579 "trsvcid": "$NVMF_PORT", 00:29:43.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:43.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:43.579 "hdgst": ${hdgst:-false}, 00:29:43.579 "ddgst": ${ddgst:-false} 00:29:43.579 }, 00:29:43.579 "method": "bdev_nvme_attach_controller" 00:29:43.579 } 00:29:43.579 EOF 00:29:43.579 )") 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:43.579 18:28:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:43.579 "params": { 00:29:43.579 "name": "Nvme1", 00:29:43.579 "trtype": "tcp", 00:29:43.579 "traddr": "10.0.0.2", 00:29:43.579 "adrfam": "ipv4", 00:29:43.579 "trsvcid": "4420", 00:29:43.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:43.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:43.580 "hdgst": false, 00:29:43.580 "ddgst": false 00:29:43.580 }, 00:29:43.580 "method": "bdev_nvme_attach_controller" 00:29:43.580 }' 00:29:43.580 [2024-11-19 18:28:44.998264] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:29:43.580 [2024-11-19 18:28:44.998333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170023 ] 00:29:43.841 [2024-11-19 18:28:45.090115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.841 [2024-11-19 18:28:45.143150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.103 Running I/O for 1 seconds... 00:29:45.126 8448.00 IOPS, 33.00 MiB/s 00:29:45.126 Latency(us) 00:29:45.126 [2024-11-19T17:28:46.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.126 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:45.126 Verification LBA range: start 0x0 length 0x4000 00:29:45.126 Nvme1n1 : 1.01 8483.06 33.14 0.00 0.00 15022.19 2730.67 14417.92 00:29:45.126 [2024-11-19T17:28:46.597Z] =================================================================================================================== 00:29:45.126 [2024-11-19T17:28:46.597Z] Total : 8483.06 33.14 0.00 0.00 15022.19 2730.67 14417.92 00:29:45.126 18:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2170342 00:29:45.126 18:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:45.126 18:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:45.126 18:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:45.126 18:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:45.126 18:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:45.126 18:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:29:45.126 18:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:45.126 { 00:29:45.126 "params": { 00:29:45.126 "name": "Nvme$subsystem", 00:29:45.126 "trtype": "$TEST_TRANSPORT", 00:29:45.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:45.126 "adrfam": "ipv4", 00:29:45.126 "trsvcid": "$NVMF_PORT", 00:29:45.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:45.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:45.126 "hdgst": ${hdgst:-false}, 00:29:45.126 "ddgst": ${ddgst:-false} 00:29:45.126 }, 00:29:45.126 "method": "bdev_nvme_attach_controller" 00:29:45.126 } 00:29:45.126 EOF 00:29:45.126 )") 00:29:45.126 18:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:45.126 18:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:45.126 18:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:45.126 18:28:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:45.126 "params": { 00:29:45.126 "name": "Nvme1", 00:29:45.126 "trtype": "tcp", 00:29:45.126 "traddr": "10.0.0.2", 00:29:45.126 "adrfam": "ipv4", 00:29:45.126 "trsvcid": "4420", 00:29:45.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:45.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:45.126 "hdgst": false, 00:29:45.126 "ddgst": false 00:29:45.126 }, 00:29:45.126 "method": "bdev_nvme_attach_controller" 00:29:45.126 }' 00:29:45.126 [2024-11-19 18:28:46.501272] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:29:45.127 [2024-11-19 18:28:46.501327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170342 ] 00:29:45.127 [2024-11-19 18:28:46.587594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.432 [2024-11-19 18:28:46.623225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.432 Running I/O for 15 seconds... 00:29:47.344 11312.00 IOPS, 44.19 MiB/s [2024-11-19T17:28:49.760Z] 11305.00 IOPS, 44.16 MiB/s [2024-11-19T17:28:49.760Z] 18:28:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2169964 00:29:48.289 18:28:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:48.289 [2024-11-19 18:28:49.464255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464357] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 
18:28:49.464593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:86 nsid:1 lba:111536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.289 [2024-11-19 18:28:49.464702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.289 [2024-11-19 18:28:49.464722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.289 [2024-11-19 18:28:49.464732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:111544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.464742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.464752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.464759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.464769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.464776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.464786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.464793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.464802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.464810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.464820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.464827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.464837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.464844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.464853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.464860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.464870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.464877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.464886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.464893] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.464903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.464910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.464919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.464926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.464935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.464943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.464953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.464960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.464969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.464977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.464986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:48 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.464994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:48.290 [2024-11-19 18:28:49.465086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 
lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:111816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.290 [2024-11-19 18:28:49.465364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 
[2024-11-19 18:28:49.465375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.290 [2024-11-19 18:28:49.465383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.290 [2024-11-19 18:28:49.465393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.291 [2024-11-19 18:28:49.465503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:48.291 [2024-11-19 18:28:49.465671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:111040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 
nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.291 [2024-11-19 18:28:49.465928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:48.291 [2024-11-19 18:28:49.465954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.465987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.465994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.466005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.466012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.466023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.466030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.466040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.466048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.291 [2024-11-19 18:28:49.466057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.291 [2024-11-19 18:28:49.466064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.292 [2024-11-19 18:28:49.466074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.292 [2024-11-19 18:28:49.466081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.292 [2024-11-19 18:28:49.466091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.292 [2024-11-19 18:28:49.466098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.292 [2024-11-19 18:28:49.466108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.292 [2024-11-19 18:28:49.466115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.292 [2024-11-19 18:28:49.466124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.292 [2024-11-19 18:28:49.466132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.292 [2024-11-19 18:28:49.466141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 
lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.292 [2024-11-19 18:28:49.466148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.292 [2024-11-19 18:28:49.466162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.292 [2024-11-19 18:28:49.466170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.292 [2024-11-19 18:28:49.466179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.292 [2024-11-19 18:28:49.466187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.292 [2024-11-19 18:28:49.466196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.292 [2024-11-19 18:28:49.466203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.292 [2024-11-19 18:28:49.466213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.292 [2024-11-19 18:28:49.466220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.292 [2024-11-19 18:28:49.466229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.292 [2024-11-19 18:28:49.466238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.292 
[2024-11-19 18:28:49.466247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:111296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.292 [2024-11-19 18:28:49.466254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.292 [2024-11-19 18:28:49.466264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.292 [2024-11-19 18:28:49.466272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.292 [2024-11-19 18:28:49.466282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.292 [2024-11-19 18:28:49.466289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.292 [2024-11-19 18:28:49.466298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:111320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.292 [2024-11-19 18:28:49.466305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.292 [2024-11-19 18:28:49.466314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.292 [2024-11-19 18:28:49.466322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.292 [2024-11-19 18:28:49.466331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.292 [2024-11-19 18:28:49.466339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.292 [2024-11-19 18:28:49.466348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:48.292 [2024-11-19 18:28:49.466355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.292 [2024-11-19 18:28:49.466364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:48.292 [2024-11-19 18:28:49.466372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.292 [2024-11-19 18:28:49.466381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:48.292 [2024-11-19 18:28:49.466389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.292 [2024-11-19 18:28:49.466398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:48.292 [2024-11-19 18:28:49.466405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.292 [2024-11-19 18:28:49.466414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:48.292 [2024-11-19 18:28:49.466423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.292 [2024-11-19 18:28:49.466432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:48.292 [2024-11-19 18:28:49.466440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.292 [2024-11-19 18:28:49.466451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:48.292 [2024-11-19 18:28:49.466458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.292 [2024-11-19 18:28:49.466467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:48.292 [2024-11-19 18:28:49.466474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.292 [2024-11-19 18:28:49.466483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:48.292 [2024-11-19 18:28:49.466491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.292 [2024-11-19 18:28:49.466501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:48.292 [2024-11-19 18:28:49.466508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.292 [2024-11-19 18:28:49.466517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f390 is same with the state(6) to be set
00:29:48.292 [2024-11-19 18:28:49.466526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:48.292 [2024-11-19 18:28:49.466533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:48.292 [2024-11-19 18:28:49.466540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111912 len:8 PRP1 0x0 PRP2 0x0
00:29:48.292 [2024-11-19 18:28:49.466548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.292 [2024-11-19 18:28:49.466625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:48.292 [2024-11-19 18:28:49.466637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.292 [2024-11-19 18:28:49.466646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:48.292 [2024-11-19 18:28:49.466654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.292 [2024-11-19 18:28:49.466662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:48.292 [2024-11-19 18:28:49.466669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.292 [2024-11-19 18:28:49.466677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:48.292 [2024-11-19 18:28:49.466684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.292 [2024-11-19 18:28:49.466691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.292 [2024-11-19 18:28:49.470192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.292 [2024-11-19 18:28:49.470213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.292 [2024-11-19 18:28:49.471008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.292 [2024-11-19 18:28:49.471026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.292 [2024-11-19 18:28:49.471034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.292 [2024-11-19 18:28:49.471264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.292 [2024-11-19 18:28:49.471485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.292 [2024-11-19 18:28:49.471494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.292 [2024-11-19 18:28:49.471504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.292 [2024-11-19 18:28:49.471512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.292 [2024-11-19 18:28:49.484264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.292 [2024-11-19 18:28:49.484828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.293 [2024-11-19 18:28:49.484846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.293 [2024-11-19 18:28:49.484854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.293 [2024-11-19 18:28:49.485073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.293 [2024-11-19 18:28:49.485301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.293 [2024-11-19 18:28:49.485310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.293 [2024-11-19 18:28:49.485318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.293 [2024-11-19 18:28:49.485325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.293 [2024-11-19 18:28:49.498098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.293 [2024-11-19 18:28:49.498752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.293 [2024-11-19 18:28:49.498793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.293 [2024-11-19 18:28:49.498804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.293 [2024-11-19 18:28:49.499047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.293 [2024-11-19 18:28:49.499283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.293 [2024-11-19 18:28:49.499293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.293 [2024-11-19 18:28:49.499303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.293 [2024-11-19 18:28:49.499311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.293 [2024-11-19 18:28:49.512085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.293 [2024-11-19 18:28:49.512704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.293 [2024-11-19 18:28:49.512745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.293 [2024-11-19 18:28:49.512757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.293 [2024-11-19 18:28:49.512997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.293 [2024-11-19 18:28:49.513233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.293 [2024-11-19 18:28:49.513249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.293 [2024-11-19 18:28:49.513257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.293 [2024-11-19 18:28:49.513265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.293 [2024-11-19 18:28:49.526029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.293 [2024-11-19 18:28:49.526711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.293 [2024-11-19 18:28:49.526753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.293 [2024-11-19 18:28:49.526764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.293 [2024-11-19 18:28:49.527006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.293 [2024-11-19 18:28:49.527240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.293 [2024-11-19 18:28:49.527251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.293 [2024-11-19 18:28:49.527259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.293 [2024-11-19 18:28:49.527267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.293 [2024-11-19 18:28:49.540026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.293 [2024-11-19 18:28:49.540666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.293 [2024-11-19 18:28:49.540710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.293 [2024-11-19 18:28:49.540721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.293 [2024-11-19 18:28:49.540963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.293 [2024-11-19 18:28:49.541197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.293 [2024-11-19 18:28:49.541207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.293 [2024-11-19 18:28:49.541215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.293 [2024-11-19 18:28:49.541223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.293 [2024-11-19 18:28:49.553979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.293 [2024-11-19 18:28:49.554642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.293 [2024-11-19 18:28:49.554687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.293 [2024-11-19 18:28:49.554699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.293 [2024-11-19 18:28:49.554942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.293 [2024-11-19 18:28:49.555178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.293 [2024-11-19 18:28:49.555189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.293 [2024-11-19 18:28:49.555197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.293 [2024-11-19 18:28:49.555210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.293 [2024-11-19 18:28:49.567976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.293 [2024-11-19 18:28:49.568627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.293 [2024-11-19 18:28:49.568674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.293 [2024-11-19 18:28:49.568686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.293 [2024-11-19 18:28:49.568930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.293 [2024-11-19 18:28:49.569156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.293 [2024-11-19 18:28:49.569179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.293 [2024-11-19 18:28:49.569187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.293 [2024-11-19 18:28:49.569196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.293 [2024-11-19 18:28:49.581961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.293 [2024-11-19 18:28:49.582528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.293 [2024-11-19 18:28:49.582552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.293 [2024-11-19 18:28:49.582561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.293 [2024-11-19 18:28:49.582782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.293 [2024-11-19 18:28:49.583003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.293 [2024-11-19 18:28:49.583013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.293 [2024-11-19 18:28:49.583021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.293 [2024-11-19 18:28:49.583028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.293 [2024-11-19 18:28:49.595805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.293 [2024-11-19 18:28:49.596360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.293 [2024-11-19 18:28:49.596383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.293 [2024-11-19 18:28:49.596392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.293 [2024-11-19 18:28:49.596613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.293 [2024-11-19 18:28:49.596834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.294 [2024-11-19 18:28:49.596846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.294 [2024-11-19 18:28:49.596854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.294 [2024-11-19 18:28:49.596861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.294 [2024-11-19 18:28:49.609647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.294 [2024-11-19 18:28:49.610411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.294 [2024-11-19 18:28:49.610468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.294 [2024-11-19 18:28:49.610480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.294 [2024-11-19 18:28:49.610731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.294 [2024-11-19 18:28:49.610959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.294 [2024-11-19 18:28:49.610970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.294 [2024-11-19 18:28:49.610979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.294 [2024-11-19 18:28:49.610988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.294 [2024-11-19 18:28:49.623576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.294 [2024-11-19 18:28:49.624281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.294 [2024-11-19 18:28:49.624342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.294 [2024-11-19 18:28:49.624354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.294 [2024-11-19 18:28:49.624608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.294 [2024-11-19 18:28:49.624836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.294 [2024-11-19 18:28:49.624848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.294 [2024-11-19 18:28:49.624857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.294 [2024-11-19 18:28:49.624865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.294 [2024-11-19 18:28:49.637461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.294 [2024-11-19 18:28:49.638188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.294 [2024-11-19 18:28:49.638253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.294 [2024-11-19 18:28:49.638267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.294 [2024-11-19 18:28:49.638526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.294 [2024-11-19 18:28:49.638755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.294 [2024-11-19 18:28:49.638766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.294 [2024-11-19 18:28:49.638776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.294 [2024-11-19 18:28:49.638785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.294 [2024-11-19 18:28:49.651379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.294 [2024-11-19 18:28:49.652087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.294 [2024-11-19 18:28:49.652151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.294 [2024-11-19 18:28:49.652179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.294 [2024-11-19 18:28:49.652444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.294 [2024-11-19 18:28:49.652674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.294 [2024-11-19 18:28:49.652685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.294 [2024-11-19 18:28:49.652694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.294 [2024-11-19 18:28:49.652703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.294 [2024-11-19 18:28:49.665287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.294 [2024-11-19 18:28:49.665994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.294 [2024-11-19 18:28:49.666059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.294 [2024-11-19 18:28:49.666072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.294 [2024-11-19 18:28:49.666343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.294 [2024-11-19 18:28:49.666575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.294 [2024-11-19 18:28:49.666586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.294 [2024-11-19 18:28:49.666595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.294 [2024-11-19 18:28:49.666604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.294 [2024-11-19 18:28:49.679183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.294 [2024-11-19 18:28:49.679898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.294 [2024-11-19 18:28:49.679964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.294 [2024-11-19 18:28:49.679977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.294 [2024-11-19 18:28:49.680250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.294 [2024-11-19 18:28:49.680480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.294 [2024-11-19 18:28:49.680491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.294 [2024-11-19 18:28:49.680500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.294 [2024-11-19 18:28:49.680509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.294 [2024-11-19 18:28:49.693245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.294 [2024-11-19 18:28:49.693929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.294 [2024-11-19 18:28:49.693993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.294 [2024-11-19 18:28:49.694006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.294 [2024-11-19 18:28:49.694277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.294 [2024-11-19 18:28:49.694509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.294 [2024-11-19 18:28:49.694528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.294 [2024-11-19 18:28:49.694537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.294 [2024-11-19 18:28:49.694546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.294 [2024-11-19 18:28:49.707143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.294 [2024-11-19 18:28:49.707868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.294 [2024-11-19 18:28:49.707932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.294 [2024-11-19 18:28:49.707945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.294 [2024-11-19 18:28:49.708217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.294 [2024-11-19 18:28:49.708447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.294 [2024-11-19 18:28:49.708458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.294 [2024-11-19 18:28:49.708467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.294 [2024-11-19 18:28:49.708477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.294 [2024-11-19 18:28:49.721050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.294 [2024-11-19 18:28:49.721740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.294 [2024-11-19 18:28:49.721806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.294 [2024-11-19 18:28:49.721819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.294 [2024-11-19 18:28:49.722076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.294 [2024-11-19 18:28:49.722319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.294 [2024-11-19 18:28:49.722332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.294 [2024-11-19 18:28:49.722343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.294 [2024-11-19 18:28:49.722353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.294 [2024-11-19 18:28:49.734951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.294 [2024-11-19 18:28:49.735597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.295 [2024-11-19 18:28:49.735628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.295 [2024-11-19 18:28:49.735638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.295 [2024-11-19 18:28:49.735861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.295 [2024-11-19 18:28:49.736085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.295 [2024-11-19 18:28:49.736099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.295 [2024-11-19 18:28:49.736108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.295 [2024-11-19 18:28:49.736124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.295 [2024-11-19 18:28:49.748907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.295 [2024-11-19 18:28:49.749475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.295 [2024-11-19 18:28:49.749499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.295 [2024-11-19 18:28:49.749507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.295 [2024-11-19 18:28:49.749728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.295 [2024-11-19 18:28:49.749949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.295 [2024-11-19 18:28:49.749961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.295 [2024-11-19 18:28:49.749969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.295 [2024-11-19 18:28:49.749978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.558 [2024-11-19 18:28:49.762755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.558 [2024-11-19 18:28:49.763310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.558 [2024-11-19 18:28:49.763333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.558 [2024-11-19 18:28:49.763341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.558 [2024-11-19 18:28:49.763562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.558 [2024-11-19 18:28:49.763783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.558 [2024-11-19 18:28:49.763795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.558 [2024-11-19 18:28:49.763802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.558 [2024-11-19 18:28:49.763810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.558 [2024-11-19 18:28:49.776580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.558 [2024-11-19 18:28:49.777209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.558 [2024-11-19 18:28:49.777262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.558 [2024-11-19 18:28:49.777275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.558 [2024-11-19 18:28:49.777522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.558 [2024-11-19 18:28:49.777748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.558 [2024-11-19 18:28:49.777759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.558 [2024-11-19 18:28:49.777768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.558 [2024-11-19 18:28:49.777777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.558 10082.00 IOPS, 39.38 MiB/s [2024-11-19T17:28:50.029Z] [2024-11-19 18:28:49.790569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.558 [2024-11-19 18:28:49.791056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.558 [2024-11-19 18:28:49.791082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.558 [2024-11-19 18:28:49.791091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.558 [2024-11-19 18:28:49.791329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.558 [2024-11-19 18:28:49.791554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.558 [2024-11-19 18:28:49.791567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.558 [2024-11-19 18:28:49.791575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.558 [2024-11-19 18:28:49.791582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.558 [2024-11-19 18:28:49.804558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.558 [2024-11-19 18:28:49.805104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.558 [2024-11-19 18:28:49.805128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.558 [2024-11-19 18:28:49.805136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.558 [2024-11-19 18:28:49.805365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.558 [2024-11-19 18:28:49.805589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.558 [2024-11-19 18:28:49.805600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.558 [2024-11-19 18:28:49.805608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.558 [2024-11-19 18:28:49.805615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.558 [2024-11-19 18:28:49.818398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.558 [2024-11-19 18:28:49.819099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.558 [2024-11-19 18:28:49.819175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.558 [2024-11-19 18:28:49.819189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.558 [2024-11-19 18:28:49.819446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.558 [2024-11-19 18:28:49.819675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.558 [2024-11-19 18:28:49.819687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.558 [2024-11-19 18:28:49.819696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.558 [2024-11-19 18:28:49.819705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.558 [2024-11-19 18:28:49.832296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.558 [2024-11-19 18:28:49.833000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.558 [2024-11-19 18:28:49.833064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.558 [2024-11-19 18:28:49.833077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.558 [2024-11-19 18:28:49.833357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.558 [2024-11-19 18:28:49.833587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.558 [2024-11-19 18:28:49.833599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.558 [2024-11-19 18:28:49.833609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.558 [2024-11-19 18:28:49.833618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.558 [2024-11-19 18:28:49.846211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.558 [2024-11-19 18:28:49.846920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.558 [2024-11-19 18:28:49.846985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.558 [2024-11-19 18:28:49.846998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.558 [2024-11-19 18:28:49.847272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.558 [2024-11-19 18:28:49.847502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.558 [2024-11-19 18:28:49.847514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.558 [2024-11-19 18:28:49.847523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.558 [2024-11-19 18:28:49.847532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.559 [2024-11-19 18:28:49.860111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.559 [2024-11-19 18:28:49.860793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.559 [2024-11-19 18:28:49.860858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.559 [2024-11-19 18:28:49.860871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.559 [2024-11-19 18:28:49.861129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.559 [2024-11-19 18:28:49.861374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.559 [2024-11-19 18:28:49.861387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.559 [2024-11-19 18:28:49.861396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.559 [2024-11-19 18:28:49.861405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.559 [2024-11-19 18:28:49.874040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.559 [2024-11-19 18:28:49.874689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.559 [2024-11-19 18:28:49.874721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.559 [2024-11-19 18:28:49.874730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.559 [2024-11-19 18:28:49.874954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.559 [2024-11-19 18:28:49.875187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.559 [2024-11-19 18:28:49.875215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.559 [2024-11-19 18:28:49.875224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.559 [2024-11-19 18:28:49.875232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.559 [2024-11-19 18:28:49.888011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.559 [2024-11-19 18:28:49.888613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.559 [2024-11-19 18:28:49.888640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.559 [2024-11-19 18:28:49.888648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.559 [2024-11-19 18:28:49.888871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.559 [2024-11-19 18:28:49.889093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.559 [2024-11-19 18:28:49.889106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.559 [2024-11-19 18:28:49.889114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.559 [2024-11-19 18:28:49.889122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.559 [2024-11-19 18:28:49.901919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.559 [2024-11-19 18:28:49.902489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.559 [2024-11-19 18:28:49.902514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.559 [2024-11-19 18:28:49.902523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.559 [2024-11-19 18:28:49.902745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.559 [2024-11-19 18:28:49.902968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.559 [2024-11-19 18:28:49.902980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.559 [2024-11-19 18:28:49.902988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.559 [2024-11-19 18:28:49.902996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.559 [2024-11-19 18:28:49.915808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.559 [2024-11-19 18:28:49.916542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.559 [2024-11-19 18:28:49.916607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.559 [2024-11-19 18:28:49.916621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.559 [2024-11-19 18:28:49.916877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.559 [2024-11-19 18:28:49.917106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.559 [2024-11-19 18:28:49.917117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.559 [2024-11-19 18:28:49.917126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.559 [2024-11-19 18:28:49.917142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.559 [2024-11-19 18:28:49.929729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.559 [2024-11-19 18:28:49.930489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.559 [2024-11-19 18:28:49.930554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.559 [2024-11-19 18:28:49.930567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.559 [2024-11-19 18:28:49.930825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.559 [2024-11-19 18:28:49.931054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.559 [2024-11-19 18:28:49.931066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.559 [2024-11-19 18:28:49.931075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.559 [2024-11-19 18:28:49.931084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.559 [2024-11-19 18:28:49.943680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.559 [2024-11-19 18:28:49.944282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.559 [2024-11-19 18:28:49.944348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.559 [2024-11-19 18:28:49.944364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.559 [2024-11-19 18:28:49.944620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.559 [2024-11-19 18:28:49.944849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.559 [2024-11-19 18:28:49.944861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.559 [2024-11-19 18:28:49.944871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.559 [2024-11-19 18:28:49.944881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.559 [2024-11-19 18:28:49.957700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.559 [2024-11-19 18:28:49.958316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.559 [2024-11-19 18:28:49.958382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.559 [2024-11-19 18:28:49.958396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.559 [2024-11-19 18:28:49.958654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.559 [2024-11-19 18:28:49.958883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.559 [2024-11-19 18:28:49.958895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.559 [2024-11-19 18:28:49.958905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.559 [2024-11-19 18:28:49.958914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.559 [2024-11-19 18:28:49.971718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.559 [2024-11-19 18:28:49.972469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.559 [2024-11-19 18:28:49.972533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.559 [2024-11-19 18:28:49.972547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.559 [2024-11-19 18:28:49.972804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.559 [2024-11-19 18:28:49.973033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.559 [2024-11-19 18:28:49.973045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.559 [2024-11-19 18:28:49.973054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.559 [2024-11-19 18:28:49.973064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.559 [2024-11-19 18:28:49.985678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.559 [2024-11-19 18:28:49.986296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.559 [2024-11-19 18:28:49.986362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.559 [2024-11-19 18:28:49.986377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.559 [2024-11-19 18:28:49.986635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.559 [2024-11-19 18:28:49.986864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.559 [2024-11-19 18:28:49.986875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.560 [2024-11-19 18:28:49.986884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.560 [2024-11-19 18:28:49.986893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.560 [2024-11-19 18:28:49.999504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.560 [2024-11-19 18:28:50.000133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.560 [2024-11-19 18:28:50.000172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.560 [2024-11-19 18:28:50.000183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.560 [2024-11-19 18:28:50.000407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.560 [2024-11-19 18:28:50.000630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.560 [2024-11-19 18:28:50.000644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.560 [2024-11-19 18:28:50.000652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.560 [2024-11-19 18:28:50.000660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.560 [2024-11-19 18:28:50.013525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.560 [2024-11-19 18:28:50.014150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.560 [2024-11-19 18:28:50.014191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.560 [2024-11-19 18:28:50.014200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.560 [2024-11-19 18:28:50.014431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.560 [2024-11-19 18:28:50.014657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.560 [2024-11-19 18:28:50.014668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.560 [2024-11-19 18:28:50.014677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.560 [2024-11-19 18:28:50.014685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.822 [2024-11-19 18:28:50.027488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.822 [2024-11-19 18:28:50.028057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.822 [2024-11-19 18:28:50.028084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.822 [2024-11-19 18:28:50.028095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.822 [2024-11-19 18:28:50.028324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.822 [2024-11-19 18:28:50.028550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.822 [2024-11-19 18:28:50.028562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.822 [2024-11-19 18:28:50.028571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.822 [2024-11-19 18:28:50.028579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.822 [2024-11-19 18:28:50.041372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.822 [2024-11-19 18:28:50.041975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.822 [2024-11-19 18:28:50.042000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.822 [2024-11-19 18:28:50.042010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.822 [2024-11-19 18:28:50.042240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.822 [2024-11-19 18:28:50.042464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.822 [2024-11-19 18:28:50.042477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.822 [2024-11-19 18:28:50.042485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.822 [2024-11-19 18:28:50.042494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.822 [2024-11-19 18:28:50.055276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.822 [2024-11-19 18:28:50.055940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.822 [2024-11-19 18:28:50.056004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.822 [2024-11-19 18:28:50.056018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.822 [2024-11-19 18:28:50.056289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.822 [2024-11-19 18:28:50.056520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.822 [2024-11-19 18:28:50.056540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.822 [2024-11-19 18:28:50.056549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.822 [2024-11-19 18:28:50.056560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.822 [2024-11-19 18:28:50.069168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.822 [2024-11-19 18:28:50.069803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.822 [2024-11-19 18:28:50.069835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.822 [2024-11-19 18:28:50.069845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.822 [2024-11-19 18:28:50.070070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.822 [2024-11-19 18:28:50.070304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.822 [2024-11-19 18:28:50.070315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.822 [2024-11-19 18:28:50.070325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.822 [2024-11-19 18:28:50.070333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.822 [2024-11-19 18:28:50.083125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:48.822 [2024-11-19 18:28:50.083745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.822 [2024-11-19 18:28:50.083772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:48.822 [2024-11-19 18:28:50.083782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:48.822 [2024-11-19 18:28:50.084004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:48.822 [2024-11-19 18:28:50.084235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:48.822 [2024-11-19 18:28:50.084248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:48.822 [2024-11-19 18:28:50.084257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:48.822 [2024-11-19 18:28:50.084265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:48.822 [2024-11-19 18:28:50.097073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.822 [2024-11-19 18:28:50.097663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.822 [2024-11-19 18:28:50.097689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:48.822 [2024-11-19 18:28:50.097698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:48.822 [2024-11-19 18:28:50.097921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:48.822 [2024-11-19 18:28:50.098145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.822 [2024-11-19 18:28:50.098157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.822 [2024-11-19 18:28:50.098179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.822 [2024-11-19 18:28:50.098195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.822 [2024-11-19 18:28:50.111007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.822 [2024-11-19 18:28:50.111605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.822 [2024-11-19 18:28:50.111632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:48.822 [2024-11-19 18:28:50.111641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:48.822 [2024-11-19 18:28:50.111864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:48.822 [2024-11-19 18:28:50.112087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.822 [2024-11-19 18:28:50.112100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.822 [2024-11-19 18:28:50.112108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.822 [2024-11-19 18:28:50.112116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.823 [2024-11-19 18:28:50.124911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.823 [2024-11-19 18:28:50.125500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.823 [2024-11-19 18:28:50.125527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:48.823 [2024-11-19 18:28:50.125536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:48.823 [2024-11-19 18:28:50.125761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:48.823 [2024-11-19 18:28:50.125983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.823 [2024-11-19 18:28:50.125996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.823 [2024-11-19 18:28:50.126004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.823 [2024-11-19 18:28:50.126013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.823 [2024-11-19 18:28:50.138793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.823 [2024-11-19 18:28:50.139373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.823 [2024-11-19 18:28:50.139400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:48.823 [2024-11-19 18:28:50.139408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:48.823 [2024-11-19 18:28:50.139630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:48.823 [2024-11-19 18:28:50.139852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.823 [2024-11-19 18:28:50.139866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.823 [2024-11-19 18:28:50.139874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.823 [2024-11-19 18:28:50.139882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.823 [2024-11-19 18:28:50.152669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.823 [2024-11-19 18:28:50.153404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.823 [2024-11-19 18:28:50.153469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:48.823 [2024-11-19 18:28:50.153483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:48.823 [2024-11-19 18:28:50.153740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:48.823 [2024-11-19 18:28:50.153968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.823 [2024-11-19 18:28:50.153982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.823 [2024-11-19 18:28:50.153991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.823 [2024-11-19 18:28:50.154001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.823 [2024-11-19 18:28:50.166610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.823 [2024-11-19 18:28:50.167269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.823 [2024-11-19 18:28:50.167335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:48.823 [2024-11-19 18:28:50.167348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:48.823 [2024-11-19 18:28:50.167607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:48.823 [2024-11-19 18:28:50.167836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.823 [2024-11-19 18:28:50.167848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.823 [2024-11-19 18:28:50.167858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.823 [2024-11-19 18:28:50.167868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.823 [2024-11-19 18:28:50.179850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.823 [2024-11-19 18:28:50.180522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.823 [2024-11-19 18:28:50.180581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:48.823 [2024-11-19 18:28:50.180592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:48.823 [2024-11-19 18:28:50.180778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:48.823 [2024-11-19 18:28:50.180938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.823 [2024-11-19 18:28:50.180947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.823 [2024-11-19 18:28:50.180953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.823 [2024-11-19 18:28:50.180961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.823 [2024-11-19 18:28:50.192497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.823 [2024-11-19 18:28:50.193124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.823 [2024-11-19 18:28:50.193186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:48.823 [2024-11-19 18:28:50.193198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:48.823 [2024-11-19 18:28:50.193387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:48.823 [2024-11-19 18:28:50.193546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.823 [2024-11-19 18:28:50.193555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.823 [2024-11-19 18:28:50.193561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.823 [2024-11-19 18:28:50.193568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.823 [2024-11-19 18:28:50.205217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.823 [2024-11-19 18:28:50.205806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.823 [2024-11-19 18:28:50.205857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:48.823 [2024-11-19 18:28:50.205866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:48.823 [2024-11-19 18:28:50.206047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:48.823 [2024-11-19 18:28:50.206230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.823 [2024-11-19 18:28:50.206240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.823 [2024-11-19 18:28:50.206247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.823 [2024-11-19 18:28:50.206255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.823 [2024-11-19 18:28:50.217914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.823 [2024-11-19 18:28:50.218441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.823 [2024-11-19 18:28:50.218466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:48.823 [2024-11-19 18:28:50.218474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:48.823 [2024-11-19 18:28:50.218628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:48.823 [2024-11-19 18:28:50.218783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.823 [2024-11-19 18:28:50.218791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.823 [2024-11-19 18:28:50.218798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.823 [2024-11-19 18:28:50.218803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.823 [2024-11-19 18:28:50.230609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.823 [2024-11-19 18:28:50.231119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.823 [2024-11-19 18:28:50.231136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:48.823 [2024-11-19 18:28:50.231143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:48.823 [2024-11-19 18:28:50.231304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:48.823 [2024-11-19 18:28:50.231457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.823 [2024-11-19 18:28:50.231470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.823 [2024-11-19 18:28:50.231476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.823 [2024-11-19 18:28:50.231481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.823 [2024-11-19 18:28:50.243285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.823 [2024-11-19 18:28:50.243840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.823 [2024-11-19 18:28:50.243881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:48.823 [2024-11-19 18:28:50.243890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:48.823 [2024-11-19 18:28:50.244063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:48.823 [2024-11-19 18:28:50.244232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.824 [2024-11-19 18:28:50.244241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.824 [2024-11-19 18:28:50.244247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.824 [2024-11-19 18:28:50.244253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.824 [2024-11-19 18:28:50.255901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.824 [2024-11-19 18:28:50.256501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.824 [2024-11-19 18:28:50.256540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:48.824 [2024-11-19 18:28:50.256549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:48.824 [2024-11-19 18:28:50.256720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:48.824 [2024-11-19 18:28:50.256877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.824 [2024-11-19 18:28:50.256886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.824 [2024-11-19 18:28:50.256893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.824 [2024-11-19 18:28:50.256900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.824 [2024-11-19 18:28:50.268570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.824 [2024-11-19 18:28:50.268953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.824 [2024-11-19 18:28:50.268972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:48.824 [2024-11-19 18:28:50.268978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:48.824 [2024-11-19 18:28:50.269131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:48.824 [2024-11-19 18:28:50.269291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.824 [2024-11-19 18:28:50.269300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.824 [2024-11-19 18:28:50.269307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.824 [2024-11-19 18:28:50.269318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:48.824 [2024-11-19 18:28:50.281237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:48.824 [2024-11-19 18:28:50.281700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.824 [2024-11-19 18:28:50.281715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:48.824 [2024-11-19 18:28:50.281721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:48.824 [2024-11-19 18:28:50.281872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:48.824 [2024-11-19 18:28:50.282024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:48.824 [2024-11-19 18:28:50.282032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:48.824 [2024-11-19 18:28:50.282038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:48.824 [2024-11-19 18:28:50.282045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.086 [2024-11-19 18:28:50.293973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.086 [2024-11-19 18:28:50.294440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.086 [2024-11-19 18:28:50.294456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.086 [2024-11-19 18:28:50.294461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.086 [2024-11-19 18:28:50.294613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.086 [2024-11-19 18:28:50.294764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.086 [2024-11-19 18:28:50.294772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.086 [2024-11-19 18:28:50.294777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.086 [2024-11-19 18:28:50.294784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.087 [2024-11-19 18:28:50.306702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.087 [2024-11-19 18:28:50.307187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.087 [2024-11-19 18:28:50.307201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.087 [2024-11-19 18:28:50.307207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.087 [2024-11-19 18:28:50.307359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.087 [2024-11-19 18:28:50.307510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.087 [2024-11-19 18:28:50.307517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.087 [2024-11-19 18:28:50.307522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.087 [2024-11-19 18:28:50.307528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.087 [2024-11-19 18:28:50.319452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.087 [2024-11-19 18:28:50.319919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.087 [2024-11-19 18:28:50.319932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.087 [2024-11-19 18:28:50.319937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.087 [2024-11-19 18:28:50.320099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.087 [2024-11-19 18:28:50.320257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.087 [2024-11-19 18:28:50.320263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.087 [2024-11-19 18:28:50.320269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.087 [2024-11-19 18:28:50.320273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.087 [2024-11-19 18:28:50.332175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.087 [2024-11-19 18:28:50.332760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.087 [2024-11-19 18:28:50.332792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.087 [2024-11-19 18:28:50.332801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.087 [2024-11-19 18:28:50.332967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.087 [2024-11-19 18:28:50.333122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.087 [2024-11-19 18:28:50.333129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.087 [2024-11-19 18:28:50.333135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.087 [2024-11-19 18:28:50.333142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.087 [2024-11-19 18:28:50.344916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.087 [2024-11-19 18:28:50.345413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.087 [2024-11-19 18:28:50.345430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.087 [2024-11-19 18:28:50.345436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.087 [2024-11-19 18:28:50.345588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.087 [2024-11-19 18:28:50.345740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.087 [2024-11-19 18:28:50.345747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.087 [2024-11-19 18:28:50.345752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.087 [2024-11-19 18:28:50.345757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.087 [2024-11-19 18:28:50.357667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.087 [2024-11-19 18:28:50.358156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.087 [2024-11-19 18:28:50.358174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.087 [2024-11-19 18:28:50.358179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.087 [2024-11-19 18:28:50.358335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.087 [2024-11-19 18:28:50.358486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.087 [2024-11-19 18:28:50.358493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.087 [2024-11-19 18:28:50.358498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.087 [2024-11-19 18:28:50.358503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.087 [2024-11-19 18:28:50.370410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.087 [2024-11-19 18:28:50.370850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.087 [2024-11-19 18:28:50.370863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.087 [2024-11-19 18:28:50.370869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.087 [2024-11-19 18:28:50.371020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.087 [2024-11-19 18:28:50.371178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.087 [2024-11-19 18:28:50.371185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.087 [2024-11-19 18:28:50.371190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.087 [2024-11-19 18:28:50.371195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.087 [2024-11-19 18:28:50.383097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.087 [2024-11-19 18:28:50.383437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.087 [2024-11-19 18:28:50.383451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.087 [2024-11-19 18:28:50.383456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.087 [2024-11-19 18:28:50.383607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.087 [2024-11-19 18:28:50.383759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.087 [2024-11-19 18:28:50.383765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.087 [2024-11-19 18:28:50.383770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.087 [2024-11-19 18:28:50.383776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.087 [2024-11-19 18:28:50.395828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.087 [2024-11-19 18:28:50.396301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.087 [2024-11-19 18:28:50.396315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.087 [2024-11-19 18:28:50.396321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.087 [2024-11-19 18:28:50.396472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.087 [2024-11-19 18:28:50.396623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.087 [2024-11-19 18:28:50.396633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.087 [2024-11-19 18:28:50.396639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.087 [2024-11-19 18:28:50.396643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.087 [2024-11-19 18:28:50.408551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.087 [2024-11-19 18:28:50.409008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.087 [2024-11-19 18:28:50.409021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.087 [2024-11-19 18:28:50.409026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.087 [2024-11-19 18:28:50.409181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.087 [2024-11-19 18:28:50.409333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.087 [2024-11-19 18:28:50.409340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.087 [2024-11-19 18:28:50.409345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.087 [2024-11-19 18:28:50.409350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.087 [2024-11-19 18:28:50.421258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.087 [2024-11-19 18:28:50.421736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.087 [2024-11-19 18:28:50.421768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.087 [2024-11-19 18:28:50.421777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.087 [2024-11-19 18:28:50.421943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.087 [2024-11-19 18:28:50.422098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.088 [2024-11-19 18:28:50.422105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.088 [2024-11-19 18:28:50.422111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.088 [2024-11-19 18:28:50.422116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.088 [2024-11-19 18:28:50.433890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.088 [2024-11-19 18:28:50.434387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.088 [2024-11-19 18:28:50.434403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.088 [2024-11-19 18:28:50.434409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.088 [2024-11-19 18:28:50.434560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.088 [2024-11-19 18:28:50.434712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.088 [2024-11-19 18:28:50.434718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.088 [2024-11-19 18:28:50.434724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.088 [2024-11-19 18:28:50.434733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.088 [2024-11-19 18:28:50.446638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.088 [2024-11-19 18:28:50.447082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.088 [2024-11-19 18:28:50.447097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.088 [2024-11-19 18:28:50.447102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.088 [2024-11-19 18:28:50.447258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.088 [2024-11-19 18:28:50.447411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.088 [2024-11-19 18:28:50.447418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.088 [2024-11-19 18:28:50.447423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.088 [2024-11-19 18:28:50.447428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.088 [2024-11-19 18:28:50.459327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.088 [2024-11-19 18:28:50.459768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.088 [2024-11-19 18:28:50.459800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.088 [2024-11-19 18:28:50.459809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.088 [2024-11-19 18:28:50.459976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.088 [2024-11-19 18:28:50.460130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.088 [2024-11-19 18:28:50.460137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.088 [2024-11-19 18:28:50.460143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.088 [2024-11-19 18:28:50.460149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.088 [2024-11-19 18:28:50.472060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.088 [2024-11-19 18:28:50.472676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.088 [2024-11-19 18:28:50.472708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.088 [2024-11-19 18:28:50.472717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.088 [2024-11-19 18:28:50.472884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.088 [2024-11-19 18:28:50.473039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.088 [2024-11-19 18:28:50.473047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.088 [2024-11-19 18:28:50.473053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.088 [2024-11-19 18:28:50.473059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.088 [2024-11-19 18:28:50.484688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.088 [2024-11-19 18:28:50.485320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.088 [2024-11-19 18:28:50.485352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.088 [2024-11-19 18:28:50.485361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.088 [2024-11-19 18:28:50.485530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.088 [2024-11-19 18:28:50.485685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.088 [2024-11-19 18:28:50.485691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.088 [2024-11-19 18:28:50.485697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.088 [2024-11-19 18:28:50.485703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.088 [2024-11-19 18:28:50.497415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.088 [2024-11-19 18:28:50.497926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.088 [2024-11-19 18:28:50.497942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.088 [2024-11-19 18:28:50.497947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.088 [2024-11-19 18:28:50.498099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.088 [2024-11-19 18:28:50.498256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.088 [2024-11-19 18:28:50.498263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.088 [2024-11-19 18:28:50.498269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.088 [2024-11-19 18:28:50.498274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.088 [2024-11-19 18:28:50.510170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.088 [2024-11-19 18:28:50.510625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.088 [2024-11-19 18:28:50.510639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.088 [2024-11-19 18:28:50.510644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.088 [2024-11-19 18:28:50.510795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.088 [2024-11-19 18:28:50.510946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.088 [2024-11-19 18:28:50.510954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.088 [2024-11-19 18:28:50.510959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.088 [2024-11-19 18:28:50.510964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.088 [2024-11-19 18:28:50.522852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.088 [2024-11-19 18:28:50.523433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.088 [2024-11-19 18:28:50.523465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.088 [2024-11-19 18:28:50.523473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.088 [2024-11-19 18:28:50.523647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.088 [2024-11-19 18:28:50.523802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.088 [2024-11-19 18:28:50.523809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.088 [2024-11-19 18:28:50.523815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.088 [2024-11-19 18:28:50.523821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.088 [2024-11-19 18:28:50.535576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.088 [2024-11-19 18:28:50.536056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.088 [2024-11-19 18:28:50.536087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.088 [2024-11-19 18:28:50.536096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.088 [2024-11-19 18:28:50.536271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.088 [2024-11-19 18:28:50.536426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.088 [2024-11-19 18:28:50.536433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.088 [2024-11-19 18:28:50.536438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.088 [2024-11-19 18:28:50.536444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.088 [2024-11-19 18:28:50.548203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.088 [2024-11-19 18:28:50.548807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.088 [2024-11-19 18:28:50.548839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.088 [2024-11-19 18:28:50.548847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.089 [2024-11-19 18:28:50.549014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.089 [2024-11-19 18:28:50.549177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.089 [2024-11-19 18:28:50.549185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.089 [2024-11-19 18:28:50.549191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.089 [2024-11-19 18:28:50.549197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.350 [2024-11-19 18:28:50.560821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.350 [2024-11-19 18:28:50.561446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.350 [2024-11-19 18:28:50.561478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.350 [2024-11-19 18:28:50.561486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.350 [2024-11-19 18:28:50.561654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.351 [2024-11-19 18:28:50.561808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.351 [2024-11-19 18:28:50.561819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.351 [2024-11-19 18:28:50.561825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.351 [2024-11-19 18:28:50.561832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.351 [2024-11-19 18:28:50.573444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.351 [2024-11-19 18:28:50.574073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.351 [2024-11-19 18:28:50.574104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.351 [2024-11-19 18:28:50.574113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.351 [2024-11-19 18:28:50.574289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.351 [2024-11-19 18:28:50.574444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.351 [2024-11-19 18:28:50.574451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.351 [2024-11-19 18:28:50.574457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.351 [2024-11-19 18:28:50.574463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.351 [2024-11-19 18:28:50.586079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.351 [2024-11-19 18:28:50.586639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.351 [2024-11-19 18:28:50.586671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.351 [2024-11-19 18:28:50.586679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.351 [2024-11-19 18:28:50.586847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.351 [2024-11-19 18:28:50.587001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.351 [2024-11-19 18:28:50.587008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.351 [2024-11-19 18:28:50.587014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.351 [2024-11-19 18:28:50.587020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.351 [2024-11-19 18:28:50.598801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.351 [2024-11-19 18:28:50.599409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.351 [2024-11-19 18:28:50.599441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.351 [2024-11-19 18:28:50.599450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.351 [2024-11-19 18:28:50.599617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.351 [2024-11-19 18:28:50.599771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.351 [2024-11-19 18:28:50.599778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.351 [2024-11-19 18:28:50.599784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.351 [2024-11-19 18:28:50.599793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.351 [2024-11-19 18:28:50.611439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.351 [2024-11-19 18:28:50.611942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.351 [2024-11-19 18:28:50.611958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.351 [2024-11-19 18:28:50.611964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.351 [2024-11-19 18:28:50.612116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.351 [2024-11-19 18:28:50.612274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.351 [2024-11-19 18:28:50.612281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.351 [2024-11-19 18:28:50.612286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.351 [2024-11-19 18:28:50.612291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.351 [2024-11-19 18:28:50.624048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.351 [2024-11-19 18:28:50.624523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.351 [2024-11-19 18:28:50.624537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.351 [2024-11-19 18:28:50.624543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.351 [2024-11-19 18:28:50.624694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.351 [2024-11-19 18:28:50.624845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.351 [2024-11-19 18:28:50.624852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.351 [2024-11-19 18:28:50.624857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.351 [2024-11-19 18:28:50.624862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.351 [2024-11-19 18:28:50.636761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.351 [2024-11-19 18:28:50.637258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.351 [2024-11-19 18:28:50.637271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.351 [2024-11-19 18:28:50.637277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.351 [2024-11-19 18:28:50.637428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.351 [2024-11-19 18:28:50.637579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.351 [2024-11-19 18:28:50.637585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.351 [2024-11-19 18:28:50.637590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.351 [2024-11-19 18:28:50.637595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.351 [2024-11-19 18:28:50.649524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.351 [2024-11-19 18:28:50.650118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.351 [2024-11-19 18:28:50.650149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.351 [2024-11-19 18:28:50.650164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.351 [2024-11-19 18:28:50.650332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.351 [2024-11-19 18:28:50.650487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.351 [2024-11-19 18:28:50.650494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.351 [2024-11-19 18:28:50.650500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.351 [2024-11-19 18:28:50.650506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.351 [2024-11-19 18:28:50.662275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.351 [2024-11-19 18:28:50.662844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.351 [2024-11-19 18:28:50.662876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.351 [2024-11-19 18:28:50.662885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.351 [2024-11-19 18:28:50.663054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.351 [2024-11-19 18:28:50.663217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.351 [2024-11-19 18:28:50.663225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.351 [2024-11-19 18:28:50.663232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.351 [2024-11-19 18:28:50.663238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.351 [2024-11-19 18:28:50.675004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.351 [2024-11-19 18:28:50.675511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.351 [2024-11-19 18:28:50.675527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.351 [2024-11-19 18:28:50.675533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.351 [2024-11-19 18:28:50.675685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.351 [2024-11-19 18:28:50.675836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.351 [2024-11-19 18:28:50.675843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.351 [2024-11-19 18:28:50.675848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.351 [2024-11-19 18:28:50.675853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.351 [2024-11-19 18:28:50.687755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.351 [2024-11-19 18:28:50.688086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.351 [2024-11-19 18:28:50.688101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.351 [2024-11-19 18:28:50.688106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.351 [2024-11-19 18:28:50.688267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.351 [2024-11-19 18:28:50.688419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.351 [2024-11-19 18:28:50.688425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.351 [2024-11-19 18:28:50.688430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.351 [2024-11-19 18:28:50.688435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.351 [2024-11-19 18:28:50.700489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.351 [2024-11-19 18:28:50.701041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.351 [2024-11-19 18:28:50.701072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.351 [2024-11-19 18:28:50.701081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.351 [2024-11-19 18:28:50.701254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.351 [2024-11-19 18:28:50.701409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.351 [2024-11-19 18:28:50.701416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.351 [2024-11-19 18:28:50.701422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.351 [2024-11-19 18:28:50.701428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.351 [2024-11-19 18:28:50.713202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.351 [2024-11-19 18:28:50.713682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.351 [2024-11-19 18:28:50.713697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.351 [2024-11-19 18:28:50.713703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.351 [2024-11-19 18:28:50.713854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.351 [2024-11-19 18:28:50.714005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.351 [2024-11-19 18:28:50.714012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.351 [2024-11-19 18:28:50.714017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.351 [2024-11-19 18:28:50.714022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.351 [2024-11-19 18:28:50.725928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.351 [2024-11-19 18:28:50.726300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.351 [2024-11-19 18:28:50.726313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.351 [2024-11-19 18:28:50.726319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.351 [2024-11-19 18:28:50.726470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.351 [2024-11-19 18:28:50.726622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.351 [2024-11-19 18:28:50.726632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.351 [2024-11-19 18:28:50.726637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.351 [2024-11-19 18:28:50.726642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.351 [2024-11-19 18:28:50.738543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.351 [2024-11-19 18:28:50.738898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.351 [2024-11-19 18:28:50.738912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.351 [2024-11-19 18:28:50.738918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.351 [2024-11-19 18:28:50.739069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.351 [2024-11-19 18:28:50.739226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.351 [2024-11-19 18:28:50.739233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.351 [2024-11-19 18:28:50.739238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.351 [2024-11-19 18:28:50.739244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.351 [2024-11-19 18:28:50.751152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.352 [2024-11-19 18:28:50.751691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.352 [2024-11-19 18:28:50.751723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.352 [2024-11-19 18:28:50.751732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.352 [2024-11-19 18:28:50.751899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.352 [2024-11-19 18:28:50.752053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.352 [2024-11-19 18:28:50.752060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.352 [2024-11-19 18:28:50.752066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.352 [2024-11-19 18:28:50.752072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.352 [2024-11-19 18:28:50.763843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.352 [2024-11-19 18:28:50.764335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.352 [2024-11-19 18:28:50.764351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.352 [2024-11-19 18:28:50.764357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.352 [2024-11-19 18:28:50.764509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.352 [2024-11-19 18:28:50.764660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.352 [2024-11-19 18:28:50.764667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.352 [2024-11-19 18:28:50.764673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.352 [2024-11-19 18:28:50.764681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.352 [2024-11-19 18:28:50.776587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.352 [2024-11-19 18:28:50.777040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.352 [2024-11-19 18:28:50.777053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.352 [2024-11-19 18:28:50.777059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.352 [2024-11-19 18:28:50.777254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.352 [2024-11-19 18:28:50.777407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.352 [2024-11-19 18:28:50.777414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.352 [2024-11-19 18:28:50.777420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.352 [2024-11-19 18:28:50.777425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.352 7561.50 IOPS, 29.54 MiB/s [2024-11-19T17:28:50.823Z] [2024-11-19 18:28:50.789319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.352 [2024-11-19 18:28:50.789910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.352 [2024-11-19 18:28:50.789942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.352 [2024-11-19 18:28:50.789951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.352 [2024-11-19 18:28:50.790118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.352 [2024-11-19 18:28:50.790280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.352 [2024-11-19 18:28:50.790288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.352 [2024-11-19 18:28:50.790294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.352 [2024-11-19 18:28:50.790300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.352 [2024-11-19 18:28:50.801928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.352 [2024-11-19 18:28:50.802210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.352 [2024-11-19 18:28:50.802226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.352 [2024-11-19 18:28:50.802232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.352 [2024-11-19 18:28:50.802384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.352 [2024-11-19 18:28:50.802535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.352 [2024-11-19 18:28:50.802543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.352 [2024-11-19 18:28:50.802548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.352 [2024-11-19 18:28:50.802553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.352 [2024-11-19 18:28:50.814605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.352 [2024-11-19 18:28:50.814947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.352 [2024-11-19 18:28:50.814961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.352 [2024-11-19 18:28:50.814966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.352 [2024-11-19 18:28:50.815117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.352 [2024-11-19 18:28:50.815274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.352 [2024-11-19 18:28:50.815281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.352 [2024-11-19 18:28:50.815286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.352 [2024-11-19 18:28:50.815291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.614 [2024-11-19 18:28:50.827335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.614 [2024-11-19 18:28:50.827818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.614 [2024-11-19 18:28:50.827832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.614 [2024-11-19 18:28:50.827837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.614 [2024-11-19 18:28:50.827989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.614 [2024-11-19 18:28:50.828140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.614 [2024-11-19 18:28:50.828146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.614 [2024-11-19 18:28:50.828151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.614 [2024-11-19 18:28:50.828155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.614 [2024-11-19 18:28:50.840055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.614 [2024-11-19 18:28:50.840542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.614 [2024-11-19 18:28:50.840556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.614 [2024-11-19 18:28:50.840562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.614 [2024-11-19 18:28:50.840713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.614 [2024-11-19 18:28:50.840864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.614 [2024-11-19 18:28:50.840871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.614 [2024-11-19 18:28:50.840876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.614 [2024-11-19 18:28:50.840881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.614 [2024-11-19 18:28:50.852677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.614 [2024-11-19 18:28:50.853165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.614 [2024-11-19 18:28:50.853179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.614 [2024-11-19 18:28:50.853184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.614 [2024-11-19 18:28:50.853338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.614 [2024-11-19 18:28:50.853490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.614 [2024-11-19 18:28:50.853497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.614 [2024-11-19 18:28:50.853502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.614 [2024-11-19 18:28:50.853507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.614 [2024-11-19 18:28:50.865572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.614 [2024-11-19 18:28:50.866051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.614 [2024-11-19 18:28:50.866064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.614 [2024-11-19 18:28:50.866070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.614 [2024-11-19 18:28:50.866224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.614 [2024-11-19 18:28:50.866376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.614 [2024-11-19 18:28:50.866384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.614 [2024-11-19 18:28:50.866389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.614 [2024-11-19 18:28:50.866394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.614 [2024-11-19 18:28:50.878296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.614 [2024-11-19 18:28:50.878873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.614 [2024-11-19 18:28:50.878904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.614 [2024-11-19 18:28:50.878913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.614 [2024-11-19 18:28:50.879080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.614 [2024-11-19 18:28:50.879242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.614 [2024-11-19 18:28:50.879249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.614 [2024-11-19 18:28:50.879254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.614 [2024-11-19 18:28:50.879260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.614 [2024-11-19 18:28:50.891025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.614 [2024-11-19 18:28:50.891535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.614 [2024-11-19 18:28:50.891552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.614 [2024-11-19 18:28:50.891558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.614 [2024-11-19 18:28:50.891709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.614 [2024-11-19 18:28:50.891861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.614 [2024-11-19 18:28:50.891872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.614 [2024-11-19 18:28:50.891877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.614 [2024-11-19 18:28:50.891882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.614 [2024-11-19 18:28:50.903652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.614 [2024-11-19 18:28:50.904124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.614 [2024-11-19 18:28:50.904138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.614 [2024-11-19 18:28:50.904143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.614 [2024-11-19 18:28:50.904299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.614 [2024-11-19 18:28:50.904451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.614 [2024-11-19 18:28:50.904457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.614 [2024-11-19 18:28:50.904462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.615 [2024-11-19 18:28:50.904467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.615 [2024-11-19 18:28:50.916375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.615 [2024-11-19 18:28:50.916824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.615 [2024-11-19 18:28:50.916837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.615 [2024-11-19 18:28:50.916842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.615 [2024-11-19 18:28:50.916993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.615 [2024-11-19 18:28:50.917144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.615 [2024-11-19 18:28:50.917151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.615 [2024-11-19 18:28:50.917156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.615 [2024-11-19 18:28:50.917166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.615 [2024-11-19 18:28:50.929066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.615 [2024-11-19 18:28:50.929525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.615 [2024-11-19 18:28:50.929539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.615 [2024-11-19 18:28:50.929544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.615 [2024-11-19 18:28:50.929695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.615 [2024-11-19 18:28:50.929846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.615 [2024-11-19 18:28:50.929853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.615 [2024-11-19 18:28:50.929858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.615 [2024-11-19 18:28:50.929866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.615 [2024-11-19 18:28:50.941771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.615 [2024-11-19 18:28:50.942153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.615 [2024-11-19 18:28:50.942171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.615 [2024-11-19 18:28:50.942176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.615 [2024-11-19 18:28:50.942327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.615 [2024-11-19 18:28:50.942478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.615 [2024-11-19 18:28:50.942485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.615 [2024-11-19 18:28:50.942491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.615 [2024-11-19 18:28:50.942496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.615 [2024-11-19 18:28:50.954396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.615 [2024-11-19 18:28:50.954884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.615 [2024-11-19 18:28:50.954896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.615 [2024-11-19 18:28:50.954902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.615 [2024-11-19 18:28:50.955053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.615 [2024-11-19 18:28:50.955208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.615 [2024-11-19 18:28:50.955215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.615 [2024-11-19 18:28:50.955220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.615 [2024-11-19 18:28:50.955225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.615 [2024-11-19 18:28:50.967120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.615 [2024-11-19 18:28:50.967583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.615 [2024-11-19 18:28:50.967597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.615 [2024-11-19 18:28:50.967602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.615 [2024-11-19 18:28:50.967753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.615 [2024-11-19 18:28:50.967904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.615 [2024-11-19 18:28:50.967911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.615 [2024-11-19 18:28:50.967916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.615 [2024-11-19 18:28:50.967921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.615 [2024-11-19 18:28:50.979821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:49.615 [2024-11-19 18:28:50.980180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.615 [2024-11-19 18:28:50.980194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:49.615 [2024-11-19 18:28:50.980199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:49.615 [2024-11-19 18:28:50.980351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:49.615 [2024-11-19 18:28:50.980503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:49.615 [2024-11-19 18:28:50.980509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:49.615 [2024-11-19 18:28:50.980515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:49.615 [2024-11-19 18:28:50.980520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:49.615 [2024-11-19 18:28:50.992556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.615 [2024-11-19 18:28:50.993042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.615 [2024-11-19 18:28:50.993056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.615 [2024-11-19 18:28:50.993062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.615 [2024-11-19 18:28:50.993223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.615 [2024-11-19 18:28:50.993375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.615 [2024-11-19 18:28:50.993382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.615 [2024-11-19 18:28:50.993387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.615 [2024-11-19 18:28:50.993392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.615 [2024-11-19 18:28:51.005290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.615 [2024-11-19 18:28:51.005777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.615 [2024-11-19 18:28:51.005790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.615 [2024-11-19 18:28:51.005795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.615 [2024-11-19 18:28:51.005946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.615 [2024-11-19 18:28:51.006098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.615 [2024-11-19 18:28:51.006104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.615 [2024-11-19 18:28:51.006109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.615 [2024-11-19 18:28:51.006114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.615 [2024-11-19 18:28:51.018017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.615 [2024-11-19 18:28:51.018539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.615 [2024-11-19 18:28:51.018571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.615 [2024-11-19 18:28:51.018580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.615 [2024-11-19 18:28:51.018751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.615 [2024-11-19 18:28:51.018906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.615 [2024-11-19 18:28:51.018913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.615 [2024-11-19 18:28:51.018919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.615 [2024-11-19 18:28:51.018924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.615 [2024-11-19 18:28:51.030685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.615 [2024-11-19 18:28:51.031271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.615 [2024-11-19 18:28:51.031303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.615 [2024-11-19 18:28:51.031312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.615 [2024-11-19 18:28:51.031479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.615 [2024-11-19 18:28:51.031634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.616 [2024-11-19 18:28:51.031641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.616 [2024-11-19 18:28:51.031647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.616 [2024-11-19 18:28:51.031653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.616 [2024-11-19 18:28:51.043408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.616 [2024-11-19 18:28:51.044009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.616 [2024-11-19 18:28:51.044041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.616 [2024-11-19 18:28:51.044049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.616 [2024-11-19 18:28:51.044223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.616 [2024-11-19 18:28:51.044379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.616 [2024-11-19 18:28:51.044386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.616 [2024-11-19 18:28:51.044392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.616 [2024-11-19 18:28:51.044398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.616 [2024-11-19 18:28:51.056142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.616 [2024-11-19 18:28:51.056626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.616 [2024-11-19 18:28:51.056658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.616 [2024-11-19 18:28:51.056667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.616 [2024-11-19 18:28:51.056835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.616 [2024-11-19 18:28:51.056990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.616 [2024-11-19 18:28:51.057000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.616 [2024-11-19 18:28:51.057006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.616 [2024-11-19 18:28:51.057012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.616 [2024-11-19 18:28:51.068770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.616 [2024-11-19 18:28:51.069384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.616 [2024-11-19 18:28:51.069416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.616 [2024-11-19 18:28:51.069425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.616 [2024-11-19 18:28:51.069592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.616 [2024-11-19 18:28:51.069747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.616 [2024-11-19 18:28:51.069754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.616 [2024-11-19 18:28:51.069759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.616 [2024-11-19 18:28:51.069765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.878 [2024-11-19 18:28:51.081388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.878 [2024-11-19 18:28:51.081975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.878 [2024-11-19 18:28:51.082007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.878 [2024-11-19 18:28:51.082016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.878 [2024-11-19 18:28:51.082191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.878 [2024-11-19 18:28:51.082347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.878 [2024-11-19 18:28:51.082354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.878 [2024-11-19 18:28:51.082360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.878 [2024-11-19 18:28:51.082366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.878 [2024-11-19 18:28:51.094129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.878 [2024-11-19 18:28:51.094723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.878 [2024-11-19 18:28:51.094755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.878 [2024-11-19 18:28:51.094763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.878 [2024-11-19 18:28:51.094931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.878 [2024-11-19 18:28:51.095085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.878 [2024-11-19 18:28:51.095093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.878 [2024-11-19 18:28:51.095099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.878 [2024-11-19 18:28:51.095108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.878 [2024-11-19 18:28:51.106870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.879 [2024-11-19 18:28:51.107358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.879 [2024-11-19 18:28:51.107374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.879 [2024-11-19 18:28:51.107380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.879 [2024-11-19 18:28:51.107532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.879 [2024-11-19 18:28:51.107683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.879 [2024-11-19 18:28:51.107690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.879 [2024-11-19 18:28:51.107695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.879 [2024-11-19 18:28:51.107700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.879 [2024-11-19 18:28:51.119607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.879 [2024-11-19 18:28:51.120099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.879 [2024-11-19 18:28:51.120112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.879 [2024-11-19 18:28:51.120119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.879 [2024-11-19 18:28:51.120275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.879 [2024-11-19 18:28:51.120427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.879 [2024-11-19 18:28:51.120433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.879 [2024-11-19 18:28:51.120439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.879 [2024-11-19 18:28:51.120443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.879 [2024-11-19 18:28:51.132346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.879 [2024-11-19 18:28:51.132837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.879 [2024-11-19 18:28:51.132850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.879 [2024-11-19 18:28:51.132856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.879 [2024-11-19 18:28:51.133007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.879 [2024-11-19 18:28:51.133165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.879 [2024-11-19 18:28:51.133172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.879 [2024-11-19 18:28:51.133178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.879 [2024-11-19 18:28:51.133183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.879 [2024-11-19 18:28:51.145088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.879 [2024-11-19 18:28:51.145556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.879 [2024-11-19 18:28:51.145588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.879 [2024-11-19 18:28:51.145597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.879 [2024-11-19 18:28:51.145765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.879 [2024-11-19 18:28:51.145920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.879 [2024-11-19 18:28:51.145928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.879 [2024-11-19 18:28:51.145935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.879 [2024-11-19 18:28:51.145941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.879 [2024-11-19 18:28:51.157717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.879 [2024-11-19 18:28:51.158315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.879 [2024-11-19 18:28:51.158347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.879 [2024-11-19 18:28:51.158356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.879 [2024-11-19 18:28:51.158523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.879 [2024-11-19 18:28:51.158677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.879 [2024-11-19 18:28:51.158684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.879 [2024-11-19 18:28:51.158691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.879 [2024-11-19 18:28:51.158697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.879 [2024-11-19 18:28:51.170462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.879 [2024-11-19 18:28:51.171059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.879 [2024-11-19 18:28:51.171090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.879 [2024-11-19 18:28:51.171099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.879 [2024-11-19 18:28:51.171273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.879 [2024-11-19 18:28:51.171429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.879 [2024-11-19 18:28:51.171435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.879 [2024-11-19 18:28:51.171441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.879 [2024-11-19 18:28:51.171447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.879 [2024-11-19 18:28:51.183212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.879 [2024-11-19 18:28:51.183726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.879 [2024-11-19 18:28:51.183758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.879 [2024-11-19 18:28:51.183767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.879 [2024-11-19 18:28:51.183938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.879 [2024-11-19 18:28:51.184092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.879 [2024-11-19 18:28:51.184099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.879 [2024-11-19 18:28:51.184105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.879 [2024-11-19 18:28:51.184111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.879 [2024-11-19 18:28:51.195887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.879 [2024-11-19 18:28:51.196488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.879 [2024-11-19 18:28:51.196519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.879 [2024-11-19 18:28:51.196528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.879 [2024-11-19 18:28:51.196695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.879 [2024-11-19 18:28:51.196850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.879 [2024-11-19 18:28:51.196856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.879 [2024-11-19 18:28:51.196862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.879 [2024-11-19 18:28:51.196868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.879 [2024-11-19 18:28:51.208629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.879 [2024-11-19 18:28:51.209140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.879 [2024-11-19 18:28:51.209176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.879 [2024-11-19 18:28:51.209185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.879 [2024-11-19 18:28:51.209352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.879 [2024-11-19 18:28:51.209507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.879 [2024-11-19 18:28:51.209514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.879 [2024-11-19 18:28:51.209520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.879 [2024-11-19 18:28:51.209526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.879 [2024-11-19 18:28:51.221283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.879 [2024-11-19 18:28:51.221785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.879 [2024-11-19 18:28:51.221815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.879 [2024-11-19 18:28:51.221824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.879 [2024-11-19 18:28:51.221992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.879 [2024-11-19 18:28:51.222146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.879 [2024-11-19 18:28:51.222164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.879 [2024-11-19 18:28:51.222172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.879 [2024-11-19 18:28:51.222177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.880 [2024-11-19 18:28:51.233949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.880 [2024-11-19 18:28:51.234504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.880 [2024-11-19 18:28:51.234536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.880 [2024-11-19 18:28:51.234545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.880 [2024-11-19 18:28:51.234714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.880 [2024-11-19 18:28:51.234869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.880 [2024-11-19 18:28:51.234875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.880 [2024-11-19 18:28:51.234881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.880 [2024-11-19 18:28:51.234887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.880 [2024-11-19 18:28:51.246641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.880 [2024-11-19 18:28:51.247232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.880 [2024-11-19 18:28:51.247264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.880 [2024-11-19 18:28:51.247273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.880 [2024-11-19 18:28:51.247440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.880 [2024-11-19 18:28:51.247596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.880 [2024-11-19 18:28:51.247603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.880 [2024-11-19 18:28:51.247609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.880 [2024-11-19 18:28:51.247615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.880 [2024-11-19 18:28:51.259376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.880 [2024-11-19 18:28:51.259873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.880 [2024-11-19 18:28:51.259889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.880 [2024-11-19 18:28:51.259895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.880 [2024-11-19 18:28:51.260046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.880 [2024-11-19 18:28:51.260203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.880 [2024-11-19 18:28:51.260210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.880 [2024-11-19 18:28:51.260216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.880 [2024-11-19 18:28:51.260225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.880 [2024-11-19 18:28:51.271988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.880 [2024-11-19 18:28:51.272535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.880 [2024-11-19 18:28:51.272567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.880 [2024-11-19 18:28:51.272575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.880 [2024-11-19 18:28:51.272743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.880 [2024-11-19 18:28:51.272897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.880 [2024-11-19 18:28:51.272905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.880 [2024-11-19 18:28:51.272910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.880 [2024-11-19 18:28:51.272916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.880 [2024-11-19 18:28:51.284690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.880 [2024-11-19 18:28:51.285260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.880 [2024-11-19 18:28:51.285291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.880 [2024-11-19 18:28:51.285300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.880 [2024-11-19 18:28:51.285469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.880 [2024-11-19 18:28:51.285623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.880 [2024-11-19 18:28:51.285630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.880 [2024-11-19 18:28:51.285637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.880 [2024-11-19 18:28:51.285643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.880 [2024-11-19 18:28:51.297413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.880 [2024-11-19 18:28:51.297898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.880 [2024-11-19 18:28:51.297930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.880 [2024-11-19 18:28:51.297939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.880 [2024-11-19 18:28:51.298107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.880 [2024-11-19 18:28:51.298269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.880 [2024-11-19 18:28:51.298276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.880 [2024-11-19 18:28:51.298282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.880 [2024-11-19 18:28:51.298288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.880 [2024-11-19 18:28:51.310050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.880 [2024-11-19 18:28:51.310638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.880 [2024-11-19 18:28:51.310670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.880 [2024-11-19 18:28:51.310679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.880 [2024-11-19 18:28:51.310846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.880 [2024-11-19 18:28:51.311000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.880 [2024-11-19 18:28:51.311007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.880 [2024-11-19 18:28:51.311013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.880 [2024-11-19 18:28:51.311019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.880 [2024-11-19 18:28:51.322780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.880 [2024-11-19 18:28:51.323235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.880 [2024-11-19 18:28:51.323250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.880 [2024-11-19 18:28:51.323256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.880 [2024-11-19 18:28:51.323408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.880 [2024-11-19 18:28:51.323560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.880 [2024-11-19 18:28:51.323567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.880 [2024-11-19 18:28:51.323572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.880 [2024-11-19 18:28:51.323577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:49.880 [2024-11-19 18:28:51.335467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:49.880 [2024-11-19 18:28:51.336051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.880 [2024-11-19 18:28:51.336082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420
00:29:49.880 [2024-11-19 18:28:51.336091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set
00:29:49.880 [2024-11-19 18:28:51.336266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor
00:29:49.880 [2024-11-19 18:28:51.336421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:49.880 [2024-11-19 18:28:51.336428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:49.880 [2024-11-19 18:28:51.336434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:49.880 [2024-11-19 18:28:51.336439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:50.142 [2024-11-19 18:28:51.348193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.142 [2024-11-19 18:28:51.348747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-19 18:28:51.348778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.142 [2024-11-19 18:28:51.348787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.142 [2024-11-19 18:28:51.348958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.142 [2024-11-19 18:28:51.349112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.142 [2024-11-19 18:28:51.349120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.142 [2024-11-19 18:28:51.349125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.142 [2024-11-19 18:28:51.349132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.142 [2024-11-19 18:28:51.360887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.142 [2024-11-19 18:28:51.361306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-19 18:28:51.361338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.142 [2024-11-19 18:28:51.361347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.142 [2024-11-19 18:28:51.361516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.142 [2024-11-19 18:28:51.361671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.142 [2024-11-19 18:28:51.361678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.142 [2024-11-19 18:28:51.361684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.142 [2024-11-19 18:28:51.361690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.142 [2024-11-19 18:28:51.373593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.142 [2024-11-19 18:28:51.374130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-19 18:28:51.374167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.142 [2024-11-19 18:28:51.374175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.142 [2024-11-19 18:28:51.374343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.142 [2024-11-19 18:28:51.374498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.142 [2024-11-19 18:28:51.374505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.142 [2024-11-19 18:28:51.374511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.142 [2024-11-19 18:28:51.374517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.142 [2024-11-19 18:28:51.386273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.142 [2024-11-19 18:28:51.386870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-19 18:28:51.386902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.142 [2024-11-19 18:28:51.386911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.142 [2024-11-19 18:28:51.387078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.142 [2024-11-19 18:28:51.387241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.142 [2024-11-19 18:28:51.387253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.142 [2024-11-19 18:28:51.387258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.142 [2024-11-19 18:28:51.387264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.142 [2024-11-19 18:28:51.398888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.142 [2024-11-19 18:28:51.399291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-19 18:28:51.399323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.142 [2024-11-19 18:28:51.399331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.142 [2024-11-19 18:28:51.399501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.142 [2024-11-19 18:28:51.399655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.142 [2024-11-19 18:28:51.399662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.142 [2024-11-19 18:28:51.399669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.142 [2024-11-19 18:28:51.399675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.142 [2024-11-19 18:28:51.411591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.142 [2024-11-19 18:28:51.412059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.142 [2024-11-19 18:28:51.412075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.142 [2024-11-19 18:28:51.412080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.142 [2024-11-19 18:28:51.412237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.142 [2024-11-19 18:28:51.412389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.142 [2024-11-19 18:28:51.412396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.142 [2024-11-19 18:28:51.412402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.142 [2024-11-19 18:28:51.412408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.142 [2024-11-19 18:28:51.424299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.142 [2024-11-19 18:28:51.424893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-19 18:28:51.424924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-19 18:28:51.424933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.143 [2024-11-19 18:28:51.425100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.143 [2024-11-19 18:28:51.425265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.143 [2024-11-19 18:28:51.425274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.143 [2024-11-19 18:28:51.425280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.143 [2024-11-19 18:28:51.425290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.143 [2024-11-19 18:28:51.437048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.143 [2024-11-19 18:28:51.437647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-19 18:28:51.437680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-19 18:28:51.437688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.143 [2024-11-19 18:28:51.437855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.143 [2024-11-19 18:28:51.438010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.143 [2024-11-19 18:28:51.438017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.143 [2024-11-19 18:28:51.438023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.143 [2024-11-19 18:28:51.438029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.143 [2024-11-19 18:28:51.449785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.143 [2024-11-19 18:28:51.450273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-19 18:28:51.450304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-19 18:28:51.450314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.143 [2024-11-19 18:28:51.450483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.143 [2024-11-19 18:28:51.450637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.143 [2024-11-19 18:28:51.450645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.143 [2024-11-19 18:28:51.450651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.143 [2024-11-19 18:28:51.450657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.143 [2024-11-19 18:28:51.462413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.143 [2024-11-19 18:28:51.462960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-19 18:28:51.462991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-19 18:28:51.463000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.143 [2024-11-19 18:28:51.463174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.143 [2024-11-19 18:28:51.463329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.143 [2024-11-19 18:28:51.463336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.143 [2024-11-19 18:28:51.463343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.143 [2024-11-19 18:28:51.463349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.143 [2024-11-19 18:28:51.475098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.143 [2024-11-19 18:28:51.475665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-19 18:28:51.475697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-19 18:28:51.475707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.143 [2024-11-19 18:28:51.475874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.143 [2024-11-19 18:28:51.476030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.143 [2024-11-19 18:28:51.476038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.143 [2024-11-19 18:28:51.476044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.143 [2024-11-19 18:28:51.476050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.143 [2024-11-19 18:28:51.487807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.143 [2024-11-19 18:28:51.488350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-19 18:28:51.488382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-19 18:28:51.488391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.143 [2024-11-19 18:28:51.488559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.143 [2024-11-19 18:28:51.488714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.143 [2024-11-19 18:28:51.488721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.143 [2024-11-19 18:28:51.488726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.143 [2024-11-19 18:28:51.488732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.143 [2024-11-19 18:28:51.500494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.143 [2024-11-19 18:28:51.501092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-19 18:28:51.501124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-19 18:28:51.501133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.143 [2024-11-19 18:28:51.501307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.143 [2024-11-19 18:28:51.501463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.143 [2024-11-19 18:28:51.501470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.143 [2024-11-19 18:28:51.501476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.143 [2024-11-19 18:28:51.501483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.143 [2024-11-19 18:28:51.513104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.143 [2024-11-19 18:28:51.513680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-19 18:28:51.513712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-19 18:28:51.513721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.143 [2024-11-19 18:28:51.513891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.143 [2024-11-19 18:28:51.514046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.143 [2024-11-19 18:28:51.514053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.143 [2024-11-19 18:28:51.514059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.143 [2024-11-19 18:28:51.514065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.143 [2024-11-19 18:28:51.525814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.143 [2024-11-19 18:28:51.526398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-19 18:28:51.526429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-19 18:28:51.526438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.143 [2024-11-19 18:28:51.526605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.143 [2024-11-19 18:28:51.526759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.143 [2024-11-19 18:28:51.526766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.143 [2024-11-19 18:28:51.526773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.143 [2024-11-19 18:28:51.526779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.143 [2024-11-19 18:28:51.538477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.143 [2024-11-19 18:28:51.539051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-19 18:28:51.539083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-19 18:28:51.539092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.143 [2024-11-19 18:28:51.539267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.143 [2024-11-19 18:28:51.539422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.143 [2024-11-19 18:28:51.539429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.143 [2024-11-19 18:28:51.539435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.143 [2024-11-19 18:28:51.539442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.143 [2024-11-19 18:28:51.551188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.143 [2024-11-19 18:28:51.551781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.143 [2024-11-19 18:28:51.551813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.143 [2024-11-19 18:28:51.551822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.143 [2024-11-19 18:28:51.551989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.143 [2024-11-19 18:28:51.552144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.144 [2024-11-19 18:28:51.552155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.144 [2024-11-19 18:28:51.552168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.144 [2024-11-19 18:28:51.552174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.144 [2024-11-19 18:28:51.563919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.144 [2024-11-19 18:28:51.564496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.144 [2024-11-19 18:28:51.564528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.144 [2024-11-19 18:28:51.564537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.144 [2024-11-19 18:28:51.564704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.144 [2024-11-19 18:28:51.564858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.144 [2024-11-19 18:28:51.564865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.144 [2024-11-19 18:28:51.564871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.144 [2024-11-19 18:28:51.564877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.144 [2024-11-19 18:28:51.576635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.144 [2024-11-19 18:28:51.577245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.144 [2024-11-19 18:28:51.577282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.144 [2024-11-19 18:28:51.577290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.144 [2024-11-19 18:28:51.577460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.144 [2024-11-19 18:28:51.577615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.144 [2024-11-19 18:28:51.577622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.144 [2024-11-19 18:28:51.577628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.144 [2024-11-19 18:28:51.577634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.144 [2024-11-19 18:28:51.589251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.144 [2024-11-19 18:28:51.589845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.144 [2024-11-19 18:28:51.589877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.144 [2024-11-19 18:28:51.589885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.144 [2024-11-19 18:28:51.590052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.144 [2024-11-19 18:28:51.590212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.144 [2024-11-19 18:28:51.590221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.144 [2024-11-19 18:28:51.590226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.144 [2024-11-19 18:28:51.590236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.144 [2024-11-19 18:28:51.601994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.144 [2024-11-19 18:28:51.602535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.144 [2024-11-19 18:28:51.602567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.144 [2024-11-19 18:28:51.602576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.144 [2024-11-19 18:28:51.602742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.144 [2024-11-19 18:28:51.602897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.144 [2024-11-19 18:28:51.602904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.144 [2024-11-19 18:28:51.602910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.144 [2024-11-19 18:28:51.602916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.405 [2024-11-19 18:28:51.614680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.405 [2024-11-19 18:28:51.615039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-19 18:28:51.615055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.405 [2024-11-19 18:28:51.615061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.405 [2024-11-19 18:28:51.615218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.405 [2024-11-19 18:28:51.615371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.405 [2024-11-19 18:28:51.615378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.405 [2024-11-19 18:28:51.615383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.405 [2024-11-19 18:28:51.615389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.405 [2024-11-19 18:28:51.627416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.405 [2024-11-19 18:28:51.627870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-19 18:28:51.627883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.405 [2024-11-19 18:28:51.627889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.405 [2024-11-19 18:28:51.628039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.405 [2024-11-19 18:28:51.628195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.405 [2024-11-19 18:28:51.628203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.405 [2024-11-19 18:28:51.628209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.405 [2024-11-19 18:28:51.628214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.405 [2024-11-19 18:28:51.640095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.405 [2024-11-19 18:28:51.640570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-19 18:28:51.640583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.405 [2024-11-19 18:28:51.640588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.405 [2024-11-19 18:28:51.640739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.405 [2024-11-19 18:28:51.640890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.405 [2024-11-19 18:28:51.640896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.405 [2024-11-19 18:28:51.640902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.405 [2024-11-19 18:28:51.640906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.405 [2024-11-19 18:28:51.652791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.405 [2024-11-19 18:28:51.653379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-19 18:28:51.653410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.405 [2024-11-19 18:28:51.653420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.405 [2024-11-19 18:28:51.653587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.405 [2024-11-19 18:28:51.653742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.405 [2024-11-19 18:28:51.653749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.405 [2024-11-19 18:28:51.653755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.405 [2024-11-19 18:28:51.653761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.405 [2024-11-19 18:28:51.665516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.405 [2024-11-19 18:28:51.666110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-19 18:28:51.666142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.406 [2024-11-19 18:28:51.666151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.406 [2024-11-19 18:28:51.666325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.406 [2024-11-19 18:28:51.666480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.406 [2024-11-19 18:28:51.666487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.406 [2024-11-19 18:28:51.666493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.406 [2024-11-19 18:28:51.666499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.406 [2024-11-19 18:28:51.678182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.406 [2024-11-19 18:28:51.678796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-19 18:28:51.678828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.406 [2024-11-19 18:28:51.678837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.406 [2024-11-19 18:28:51.679007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.406 [2024-11-19 18:28:51.679168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.406 [2024-11-19 18:28:51.679176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.406 [2024-11-19 18:28:51.679182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.406 [2024-11-19 18:28:51.679188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.406 [2024-11-19 18:28:51.690798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.406 [2024-11-19 18:28:51.691303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-19 18:28:51.691335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.406 [2024-11-19 18:28:51.691344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.406 [2024-11-19 18:28:51.691514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.406 [2024-11-19 18:28:51.691668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.406 [2024-11-19 18:28:51.691675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.406 [2024-11-19 18:28:51.691682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.406 [2024-11-19 18:28:51.691688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.406 [2024-11-19 18:28:51.703447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.406 [2024-11-19 18:28:51.704005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-19 18:28:51.704036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.406 [2024-11-19 18:28:51.704044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.406 [2024-11-19 18:28:51.704218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.406 [2024-11-19 18:28:51.704373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.406 [2024-11-19 18:28:51.704380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.406 [2024-11-19 18:28:51.704386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.406 [2024-11-19 18:28:51.704392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.406 [2024-11-19 18:28:51.716144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.406 [2024-11-19 18:28:51.716702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-19 18:28:51.716734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.406 [2024-11-19 18:28:51.716743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.406 [2024-11-19 18:28:51.716910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.406 [2024-11-19 18:28:51.717064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.406 [2024-11-19 18:28:51.717075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.406 [2024-11-19 18:28:51.717080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.406 [2024-11-19 18:28:51.717086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.406 [2024-11-19 18:28:51.728841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.406 [2024-11-19 18:28:51.729460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-19 18:28:51.729492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.406 [2024-11-19 18:28:51.729500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.406 [2024-11-19 18:28:51.729668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.406 [2024-11-19 18:28:51.729823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.406 [2024-11-19 18:28:51.729830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.406 [2024-11-19 18:28:51.729836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.406 [2024-11-19 18:28:51.729842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.406 [2024-11-19 18:28:51.741451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.406 [2024-11-19 18:28:51.742027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-19 18:28:51.742059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.406 [2024-11-19 18:28:51.742068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.406 [2024-11-19 18:28:51.742242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.406 [2024-11-19 18:28:51.742397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.406 [2024-11-19 18:28:51.742404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.406 [2024-11-19 18:28:51.742410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.406 [2024-11-19 18:28:51.742415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.406 [2024-11-19 18:28:51.754162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.406 [2024-11-19 18:28:51.754770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-19 18:28:51.754802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.406 [2024-11-19 18:28:51.754811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.406 [2024-11-19 18:28:51.754978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.406 [2024-11-19 18:28:51.755132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.406 [2024-11-19 18:28:51.755139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.406 [2024-11-19 18:28:51.755145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.406 [2024-11-19 18:28:51.755165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.406 [2024-11-19 18:28:51.766777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.406 [2024-11-19 18:28:51.767285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-19 18:28:51.767316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.406 [2024-11-19 18:28:51.767325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.406 [2024-11-19 18:28:51.767495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.406 [2024-11-19 18:28:51.767649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.406 [2024-11-19 18:28:51.767656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.406 [2024-11-19 18:28:51.767662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.406 [2024-11-19 18:28:51.767668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.406 [2024-11-19 18:28:51.779426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.406 [2024-11-19 18:28:51.779921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-19 18:28:51.779937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.406 [2024-11-19 18:28:51.779943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.406 [2024-11-19 18:28:51.780094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.406 [2024-11-19 18:28:51.780255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.406 [2024-11-19 18:28:51.780262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.406 [2024-11-19 18:28:51.780267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.406 [2024-11-19 18:28:51.780273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.406 6049.20 IOPS, 23.63 MiB/s [2024-11-19T17:28:51.877Z] [2024-11-19 18:28:51.792151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.406 [2024-11-19 18:28:51.792748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-19 18:28:51.792780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.407 [2024-11-19 18:28:51.792789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.407 [2024-11-19 18:28:51.792956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.407 [2024-11-19 18:28:51.793110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.407 [2024-11-19 18:28:51.793117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.407 [2024-11-19 18:28:51.793124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.407 [2024-11-19 18:28:51.793130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.407 [2024-11-19 18:28:51.804891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.407 [2024-11-19 18:28:51.805449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-19 18:28:51.805481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.407 [2024-11-19 18:28:51.805490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.407 [2024-11-19 18:28:51.805657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.407 [2024-11-19 18:28:51.805811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.407 [2024-11-19 18:28:51.805818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.407 [2024-11-19 18:28:51.805824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.407 [2024-11-19 18:28:51.805830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.407 [2024-11-19 18:28:51.817593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.407 [2024-11-19 18:28:51.818188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-19 18:28:51.818220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.407 [2024-11-19 18:28:51.818229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.407 [2024-11-19 18:28:51.818397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.407 [2024-11-19 18:28:51.818552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.407 [2024-11-19 18:28:51.818559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.407 [2024-11-19 18:28:51.818565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.407 [2024-11-19 18:28:51.818571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.407 [2024-11-19 18:28:51.830328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.407 [2024-11-19 18:28:51.830924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-19 18:28:51.830955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.407 [2024-11-19 18:28:51.830963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.407 [2024-11-19 18:28:51.831130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.407 [2024-11-19 18:28:51.831292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.407 [2024-11-19 18:28:51.831300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.407 [2024-11-19 18:28:51.831305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.407 [2024-11-19 18:28:51.831311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.407 [2024-11-19 18:28:51.843067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.407 [2024-11-19 18:28:51.843554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-19 18:28:51.843586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.407 [2024-11-19 18:28:51.843599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.407 [2024-11-19 18:28:51.843767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.407 [2024-11-19 18:28:51.843922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.407 [2024-11-19 18:28:51.843929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.407 [2024-11-19 18:28:51.843935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.407 [2024-11-19 18:28:51.843940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.407 [2024-11-19 18:28:51.855691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.407 [2024-11-19 18:28:51.856244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-19 18:28:51.856276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.407 [2024-11-19 18:28:51.856285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.407 [2024-11-19 18:28:51.856454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.407 [2024-11-19 18:28:51.856609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.407 [2024-11-19 18:28:51.856616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.407 [2024-11-19 18:28:51.856622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.407 [2024-11-19 18:28:51.856627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.407 [2024-11-19 18:28:51.868393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.407 [2024-11-19 18:28:51.868991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-19 18:28:51.869023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.407 [2024-11-19 18:28:51.869031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.407 [2024-11-19 18:28:51.869206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.407 [2024-11-19 18:28:51.869361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.407 [2024-11-19 18:28:51.869369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.407 [2024-11-19 18:28:51.869374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.407 [2024-11-19 18:28:51.869380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.669 [2024-11-19 18:28:51.881132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.669 [2024-11-19 18:28:51.881712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-19 18:28:51.881743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.669 [2024-11-19 18:28:51.881752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.669 [2024-11-19 18:28:51.881920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.669 [2024-11-19 18:28:51.882075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.669 [2024-11-19 18:28:51.882085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.669 [2024-11-19 18:28:51.882091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.669 [2024-11-19 18:28:51.882097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.669 [2024-11-19 18:28:51.893857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.669 [2024-11-19 18:28:51.894449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-19 18:28:51.894480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.669 [2024-11-19 18:28:51.894489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.669 [2024-11-19 18:28:51.894656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.669 [2024-11-19 18:28:51.894811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.669 [2024-11-19 18:28:51.894818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.669 [2024-11-19 18:28:51.894824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.669 [2024-11-19 18:28:51.894830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.669 [2024-11-19 18:28:51.906593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.669 [2024-11-19 18:28:51.907146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-19 18:28:51.907183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.669 [2024-11-19 18:28:51.907192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.669 [2024-11-19 18:28:51.907359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.669 [2024-11-19 18:28:51.907514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.669 [2024-11-19 18:28:51.907521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.669 [2024-11-19 18:28:51.907527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.669 [2024-11-19 18:28:51.907533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.669 [2024-11-19 18:28:51.919295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.669 [2024-11-19 18:28:51.919814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-19 18:28:51.919846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.669 [2024-11-19 18:28:51.919854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.669 [2024-11-19 18:28:51.920022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.669 [2024-11-19 18:28:51.920185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.669 [2024-11-19 18:28:51.920193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.669 [2024-11-19 18:28:51.920199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.669 [2024-11-19 18:28:51.920209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.669 [2024-11-19 18:28:51.931964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.669 [2024-11-19 18:28:51.932424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-19 18:28:51.932454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.669 [2024-11-19 18:28:51.932463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.669 [2024-11-19 18:28:51.932630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.669 [2024-11-19 18:28:51.932785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.669 [2024-11-19 18:28:51.932792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.669 [2024-11-19 18:28:51.932797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.669 [2024-11-19 18:28:51.932803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.669 [2024-11-19 18:28:51.944698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.669 [2024-11-19 18:28:51.945289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-19 18:28:51.945320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.669 [2024-11-19 18:28:51.945329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.669 [2024-11-19 18:28:51.945496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.669 [2024-11-19 18:28:51.945651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.669 [2024-11-19 18:28:51.945657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.669 [2024-11-19 18:28:51.945663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.669 [2024-11-19 18:28:51.945670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.669 [2024-11-19 18:28:51.957425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.669 [2024-11-19 18:28:51.958016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-19 18:28:51.958047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.669 [2024-11-19 18:28:51.958056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.669 [2024-11-19 18:28:51.958230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.669 [2024-11-19 18:28:51.958386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.669 [2024-11-19 18:28:51.958393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.669 [2024-11-19 18:28:51.958399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.669 [2024-11-19 18:28:51.958404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.669 [2024-11-19 18:28:51.970150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.669 [2024-11-19 18:28:51.970758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.669 [2024-11-19 18:28:51.970789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.669 [2024-11-19 18:28:51.970798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.669 [2024-11-19 18:28:51.970966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.669 [2024-11-19 18:28:51.971120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.669 [2024-11-19 18:28:51.971127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.669 [2024-11-19 18:28:51.971133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.669 [2024-11-19 18:28:51.971138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.669 [2024-11-19 18:28:51.982895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.670 [2024-11-19 18:28:51.983507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.670 [2024-11-19 18:28:51.983539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.670 [2024-11-19 18:28:51.983547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.670 [2024-11-19 18:28:51.983715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.670 [2024-11-19 18:28:51.983869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.670 [2024-11-19 18:28:51.983876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.670 [2024-11-19 18:28:51.983882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.670 [2024-11-19 18:28:51.983888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.670 [2024-11-19 18:28:51.995648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.670 [2024-11-19 18:28:51.996234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.670 [2024-11-19 18:28:51.996266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.670 [2024-11-19 18:28:51.996275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.670 [2024-11-19 18:28:51.996452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.670 [2024-11-19 18:28:51.996607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.670 [2024-11-19 18:28:51.996614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.670 [2024-11-19 18:28:51.996620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.670 [2024-11-19 18:28:51.996626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.670 [2024-11-19 18:28:52.008384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.670 [2024-11-19 18:28:52.008967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.670 [2024-11-19 18:28:52.008998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.670 [2024-11-19 18:28:52.009011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.670 [2024-11-19 18:28:52.009189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.670 [2024-11-19 18:28:52.009345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.670 [2024-11-19 18:28:52.009352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.670 [2024-11-19 18:28:52.009358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.670 [2024-11-19 18:28:52.009365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.670 [2024-11-19 18:28:52.021120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.670 [2024-11-19 18:28:52.021591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.670 [2024-11-19 18:28:52.021606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.670 [2024-11-19 18:28:52.021612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.670 [2024-11-19 18:28:52.021763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.670 [2024-11-19 18:28:52.021915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.670 [2024-11-19 18:28:52.021921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.670 [2024-11-19 18:28:52.021927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.670 [2024-11-19 18:28:52.021931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.670 [2024-11-19 18:28:52.033824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.670 [2024-11-19 18:28:52.034313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.670 [2024-11-19 18:28:52.034327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.670 [2024-11-19 18:28:52.034333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.670 [2024-11-19 18:28:52.034484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.670 [2024-11-19 18:28:52.034635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.670 [2024-11-19 18:28:52.034642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.670 [2024-11-19 18:28:52.034647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.670 [2024-11-19 18:28:52.034652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.670 [2024-11-19 18:28:52.046539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.670 [2024-11-19 18:28:52.046903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.670 [2024-11-19 18:28:52.046915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.670 [2024-11-19 18:28:52.046921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.670 [2024-11-19 18:28:52.047072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.670 [2024-11-19 18:28:52.047228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.670 [2024-11-19 18:28:52.047238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.670 [2024-11-19 18:28:52.047244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.670 [2024-11-19 18:28:52.047249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.670 [2024-11-19 18:28:52.059308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.670 [2024-11-19 18:28:52.059852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.670 [2024-11-19 18:28:52.059883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.670 [2024-11-19 18:28:52.059892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.670 [2024-11-19 18:28:52.060059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.670 [2024-11-19 18:28:52.060220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.670 [2024-11-19 18:28:52.060228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.670 [2024-11-19 18:28:52.060234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.670 [2024-11-19 18:28:52.060240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.670 [2024-11-19 18:28:52.071993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.670 [2024-11-19 18:28:52.072472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.670 [2024-11-19 18:28:52.072504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.670 [2024-11-19 18:28:52.072513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.670 [2024-11-19 18:28:52.072679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.670 [2024-11-19 18:28:52.072834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.670 [2024-11-19 18:28:52.072840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.670 [2024-11-19 18:28:52.072847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.670 [2024-11-19 18:28:52.072853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.670 [2024-11-19 18:28:52.084655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.670 [2024-11-19 18:28:52.085152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.670 [2024-11-19 18:28:52.085172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.670 [2024-11-19 18:28:52.085178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.670 [2024-11-19 18:28:52.085329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.670 [2024-11-19 18:28:52.085481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.670 [2024-11-19 18:28:52.085487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.670 [2024-11-19 18:28:52.085493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.670 [2024-11-19 18:28:52.085502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.670 [2024-11-19 18:28:52.097400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.670 [2024-11-19 18:28:52.097894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.670 [2024-11-19 18:28:52.097908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.670 [2024-11-19 18:28:52.097914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.670 [2024-11-19 18:28:52.098065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.671 [2024-11-19 18:28:52.098224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.671 [2024-11-19 18:28:52.098231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.671 [2024-11-19 18:28:52.098237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.671 [2024-11-19 18:28:52.098241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.671 [2024-11-19 18:28:52.110125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.671 [2024-11-19 18:28:52.110690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.671 [2024-11-19 18:28:52.110722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.671 [2024-11-19 18:28:52.110730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.671 [2024-11-19 18:28:52.110897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.671 [2024-11-19 18:28:52.111052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.671 [2024-11-19 18:28:52.111059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.671 [2024-11-19 18:28:52.111066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.671 [2024-11-19 18:28:52.111072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.671 [2024-11-19 18:28:52.122828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.671 [2024-11-19 18:28:52.123318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.671 [2024-11-19 18:28:52.123333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.671 [2024-11-19 18:28:52.123339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.671 [2024-11-19 18:28:52.123491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.671 [2024-11-19 18:28:52.123643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.671 [2024-11-19 18:28:52.123650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.671 [2024-11-19 18:28:52.123655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.671 [2024-11-19 18:28:52.123660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.671 [2024-11-19 18:28:52.135552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.932 [2024-11-19 18:28:52.136022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.932 [2024-11-19 18:28:52.136037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.932 [2024-11-19 18:28:52.136043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.932 [2024-11-19 18:28:52.136197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.932 [2024-11-19 18:28:52.136350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.932 [2024-11-19 18:28:52.136357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.932 [2024-11-19 18:28:52.136363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.932 [2024-11-19 18:28:52.136368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.932 [2024-11-19 18:28:52.148262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.932 [2024-11-19 18:28:52.148703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.933 [2024-11-19 18:28:52.148716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.933 [2024-11-19 18:28:52.148721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.933 [2024-11-19 18:28:52.148872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.933 [2024-11-19 18:28:52.149023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.933 [2024-11-19 18:28:52.149030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.933 [2024-11-19 18:28:52.149035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.933 [2024-11-19 18:28:52.149040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.933 [2024-11-19 18:28:52.160931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.933 [2024-11-19 18:28:52.161436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.933 [2024-11-19 18:28:52.161468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.933 [2024-11-19 18:28:52.161477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.933 [2024-11-19 18:28:52.161646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.933 [2024-11-19 18:28:52.161800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.933 [2024-11-19 18:28:52.161807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.933 [2024-11-19 18:28:52.161813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.933 [2024-11-19 18:28:52.161819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.933 [2024-11-19 18:28:52.173584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.933 [2024-11-19 18:28:52.174266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.933 [2024-11-19 18:28:52.174298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.933 [2024-11-19 18:28:52.174311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.933 [2024-11-19 18:28:52.174478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.933 [2024-11-19 18:28:52.174633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.933 [2024-11-19 18:28:52.174640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.933 [2024-11-19 18:28:52.174646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.933 [2024-11-19 18:28:52.174651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.933 [2024-11-19 18:28:52.186279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.933 [2024-11-19 18:28:52.186840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.933 [2024-11-19 18:28:52.186871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.933 [2024-11-19 18:28:52.186880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.933 [2024-11-19 18:28:52.187046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.933 [2024-11-19 18:28:52.187207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.933 [2024-11-19 18:28:52.187215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.933 [2024-11-19 18:28:52.187220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.933 [2024-11-19 18:28:52.187226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.933 [2024-11-19 18:28:52.198992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.933 [2024-11-19 18:28:52.199369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.933 [2024-11-19 18:28:52.199384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.933 [2024-11-19 18:28:52.199391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.933 [2024-11-19 18:28:52.199543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.933 [2024-11-19 18:28:52.199694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.933 [2024-11-19 18:28:52.199702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.933 [2024-11-19 18:28:52.199707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.933 [2024-11-19 18:28:52.199713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.933 [2024-11-19 18:28:52.211617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.933 [2024-11-19 18:28:52.212204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.933 [2024-11-19 18:28:52.212236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.933 [2024-11-19 18:28:52.212245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.933 [2024-11-19 18:28:52.212414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.933 [2024-11-19 18:28:52.212569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.933 [2024-11-19 18:28:52.212580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.933 [2024-11-19 18:28:52.212585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.933 [2024-11-19 18:28:52.212592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.933 [2024-11-19 18:28:52.224359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.933 [2024-11-19 18:28:52.224940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.933 [2024-11-19 18:28:52.224971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.933 [2024-11-19 18:28:52.224980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.933 [2024-11-19 18:28:52.225148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.933 [2024-11-19 18:28:52.225307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.933 [2024-11-19 18:28:52.225315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.933 [2024-11-19 18:28:52.225321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.933 [2024-11-19 18:28:52.225327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.933 [2024-11-19 18:28:52.237083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.933 [2024-11-19 18:28:52.237541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.933 [2024-11-19 18:28:52.237557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.933 [2024-11-19 18:28:52.237564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.933 [2024-11-19 18:28:52.237716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.933 [2024-11-19 18:28:52.237868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.933 [2024-11-19 18:28:52.237875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.933 [2024-11-19 18:28:52.237880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.933 [2024-11-19 18:28:52.237885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.933 [2024-11-19 18:28:52.249776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.933 [2024-11-19 18:28:52.250108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.933 [2024-11-19 18:28:52.250121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.933 [2024-11-19 18:28:52.250127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.933 [2024-11-19 18:28:52.250281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.933 [2024-11-19 18:28:52.250433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.933 [2024-11-19 18:28:52.250440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.933 [2024-11-19 18:28:52.250445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.933 [2024-11-19 18:28:52.250453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.933 [2024-11-19 18:28:52.262489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.933 [2024-11-19 18:28:52.262964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.933 [2024-11-19 18:28:52.262996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.933 [2024-11-19 18:28:52.263005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.933 [2024-11-19 18:28:52.263181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.933 [2024-11-19 18:28:52.263336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.933 [2024-11-19 18:28:52.263344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.933 [2024-11-19 18:28:52.263350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.934 [2024-11-19 18:28:52.263356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.934 [2024-11-19 18:28:52.275113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.934 [2024-11-19 18:28:52.275499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.934 [2024-11-19 18:28:52.275515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.934 [2024-11-19 18:28:52.275521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.934 [2024-11-19 18:28:52.275672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.934 [2024-11-19 18:28:52.275823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.934 [2024-11-19 18:28:52.275830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.934 [2024-11-19 18:28:52.275835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.934 [2024-11-19 18:28:52.275840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.934 [2024-11-19 18:28:52.287737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.934 [2024-11-19 18:28:52.288229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.934 [2024-11-19 18:28:52.288243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.934 [2024-11-19 18:28:52.288248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.934 [2024-11-19 18:28:52.288399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.934 [2024-11-19 18:28:52.288551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.934 [2024-11-19 18:28:52.288557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.934 [2024-11-19 18:28:52.288563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.934 [2024-11-19 18:28:52.288568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.934 [2024-11-19 18:28:52.300467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.934 [2024-11-19 18:28:52.300963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.934 [2024-11-19 18:28:52.300976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.934 [2024-11-19 18:28:52.300982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.934 [2024-11-19 18:28:52.301132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.934 [2024-11-19 18:28:52.301289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.934 [2024-11-19 18:28:52.301296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.934 [2024-11-19 18:28:52.301302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.934 [2024-11-19 18:28:52.301307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.934 [2024-11-19 18:28:52.313206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.934 [2024-11-19 18:28:52.313660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.934 [2024-11-19 18:28:52.313673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.934 [2024-11-19 18:28:52.313679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.934 [2024-11-19 18:28:52.313829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.934 [2024-11-19 18:28:52.313981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.934 [2024-11-19 18:28:52.313987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.934 [2024-11-19 18:28:52.313992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.934 [2024-11-19 18:28:52.313997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.934 [2024-11-19 18:28:52.325884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.934 [2024-11-19 18:28:52.326424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.934 [2024-11-19 18:28:52.326437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.934 [2024-11-19 18:28:52.326443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.934 [2024-11-19 18:28:52.326593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.934 [2024-11-19 18:28:52.326745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.934 [2024-11-19 18:28:52.326751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.934 [2024-11-19 18:28:52.326757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.934 [2024-11-19 18:28:52.326762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.934 [2024-11-19 18:28:52.338509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.934 [2024-11-19 18:28:52.338995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.934 [2024-11-19 18:28:52.339008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.934 [2024-11-19 18:28:52.339014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.934 [2024-11-19 18:28:52.339172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.934 [2024-11-19 18:28:52.339323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.934 [2024-11-19 18:28:52.339330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.934 [2024-11-19 18:28:52.339336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.934 [2024-11-19 18:28:52.339341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.934 [2024-11-19 18:28:52.351225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.934 [2024-11-19 18:28:52.351687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.934 [2024-11-19 18:28:52.351700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.934 [2024-11-19 18:28:52.351706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.934 [2024-11-19 18:28:52.351857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.934 [2024-11-19 18:28:52.352008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.934 [2024-11-19 18:28:52.352014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.934 [2024-11-19 18:28:52.352019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.934 [2024-11-19 18:28:52.352024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.934 [2024-11-19 18:28:52.363913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.934 [2024-11-19 18:28:52.364537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.934 [2024-11-19 18:28:52.364569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.934 [2024-11-19 18:28:52.364577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.934 [2024-11-19 18:28:52.364744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.934 [2024-11-19 18:28:52.364899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.934 [2024-11-19 18:28:52.364906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.934 [2024-11-19 18:28:52.364912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.934 [2024-11-19 18:28:52.364918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.934 [2024-11-19 18:28:52.376531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.934 [2024-11-19 18:28:52.376966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.934 [2024-11-19 18:28:52.376982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.934 [2024-11-19 18:28:52.376988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.934 [2024-11-19 18:28:52.377139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.934 [2024-11-19 18:28:52.377296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.935 [2024-11-19 18:28:52.377310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.935 [2024-11-19 18:28:52.377315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.935 [2024-11-19 18:28:52.377320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:50.935 [2024-11-19 18:28:52.389215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:50.935 [2024-11-19 18:28:52.389782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.935 [2024-11-19 18:28:52.389814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:50.935 [2024-11-19 18:28:52.389823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:50.935 [2024-11-19 18:28:52.389990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:50.935 [2024-11-19 18:28:52.390144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:50.935 [2024-11-19 18:28:52.390151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:50.935 [2024-11-19 18:28:52.390164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:50.935 [2024-11-19 18:28:52.390171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.197 [2024-11-19 18:28:52.401935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.197 [2024-11-19 18:28:52.402416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.197 [2024-11-19 18:28:52.402432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.197 [2024-11-19 18:28:52.402439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.197 [2024-11-19 18:28:52.402590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.197 [2024-11-19 18:28:52.402741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.197 [2024-11-19 18:28:52.402748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.197 [2024-11-19 18:28:52.402753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.197 [2024-11-19 18:28:52.402758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.197 [2024-11-19 18:28:52.414659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.197 [2024-11-19 18:28:52.415140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.197 [2024-11-19 18:28:52.415176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.197 [2024-11-19 18:28:52.415186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.197 [2024-11-19 18:28:52.415355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.197 [2024-11-19 18:28:52.415510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.197 [2024-11-19 18:28:52.415517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.197 [2024-11-19 18:28:52.415523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.197 [2024-11-19 18:28:52.415533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.197 [2024-11-19 18:28:52.427290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.197 [2024-11-19 18:28:52.427748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.197 [2024-11-19 18:28:52.427763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.197 [2024-11-19 18:28:52.427769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.197 [2024-11-19 18:28:52.427920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.197 [2024-11-19 18:28:52.428072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.197 [2024-11-19 18:28:52.428079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.197 [2024-11-19 18:28:52.428084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.197 [2024-11-19 18:28:52.428089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.197 [2024-11-19 18:28:52.439995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.197 [2024-11-19 18:28:52.440509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.197 [2024-11-19 18:28:52.440523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.197 [2024-11-19 18:28:52.440529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.197 [2024-11-19 18:28:52.440679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.197 [2024-11-19 18:28:52.440831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.197 [2024-11-19 18:28:52.440838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.197 [2024-11-19 18:28:52.440844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.197 [2024-11-19 18:28:52.440849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.197 [2024-11-19 18:28:52.452626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.197 [2024-11-19 18:28:52.453108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.197 [2024-11-19 18:28:52.453122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.197 [2024-11-19 18:28:52.453127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.197 [2024-11-19 18:28:52.453281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.197 [2024-11-19 18:28:52.453433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.197 [2024-11-19 18:28:52.453439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.197 [2024-11-19 18:28:52.453445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.197 [2024-11-19 18:28:52.453450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2169964 Killed "${NVMF_APP[@]}" "$@" 00:29:51.197 18:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:51.197 18:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:51.197 18:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:51.197 18:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:51.197 18:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.197 [2024-11-19 18:28:52.465341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.197 [2024-11-19 18:28:52.465823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.197 [2024-11-19 18:28:52.465836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.197 [2024-11-19 18:28:52.465842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.197 [2024-11-19 18:28:52.465992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.197 [2024-11-19 18:28:52.466144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.197 [2024-11-19 18:28:52.466151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.197 [2024-11-19 18:28:52.466157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:29:51.197 [2024-11-19 18:28:52.466166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.197 18:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2171557 00:29:51.197 18:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2171557 00:29:51.197 18:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:51.197 18:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2171557 ']' 00:29:51.197 18:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.197 18:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.197 18:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:51.197 18:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.197 18:28:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.197 [2024-11-19 18:28:52.478064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.197 [2024-11-19 18:28:52.478559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.197 [2024-11-19 18:28:52.478572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.197 [2024-11-19 18:28:52.478578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.197 [2024-11-19 18:28:52.478731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.197 [2024-11-19 18:28:52.478883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.197 [2024-11-19 18:28:52.478891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.197 [2024-11-19 18:28:52.478898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.198 [2024-11-19 18:28:52.478904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.198 [2024-11-19 18:28:52.490805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.198 [2024-11-19 18:28:52.491311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.198 [2024-11-19 18:28:52.491325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.198 [2024-11-19 18:28:52.491332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.198 [2024-11-19 18:28:52.491482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.198 [2024-11-19 18:28:52.491633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.198 [2024-11-19 18:28:52.491641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.198 [2024-11-19 18:28:52.491646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.198 [2024-11-19 18:28:52.491652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.198 [2024-11-19 18:28:52.503414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.198 [2024-11-19 18:28:52.504014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.198 [2024-11-19 18:28:52.504047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.198 [2024-11-19 18:28:52.504056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.198 [2024-11-19 18:28:52.504230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.198 [2024-11-19 18:28:52.504386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.198 [2024-11-19 18:28:52.504393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.198 [2024-11-19 18:28:52.504399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.198 [2024-11-19 18:28:52.504405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.198 [2024-11-19 18:28:52.516024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.198 [2024-11-19 18:28:52.516615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.198 [2024-11-19 18:28:52.516647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.198 [2024-11-19 18:28:52.516656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.198 [2024-11-19 18:28:52.516823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.198 [2024-11-19 18:28:52.516978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.198 [2024-11-19 18:28:52.516985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.198 [2024-11-19 18:28:52.516991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.198 [2024-11-19 18:28:52.516997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.198 [2024-11-19 18:28:52.523427] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:29:51.198 [2024-11-19 18:28:52.523478] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.198 [2024-11-19 18:28:52.528763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.198 [2024-11-19 18:28:52.529379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.198 [2024-11-19 18:28:52.529411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.198 [2024-11-19 18:28:52.529420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.198 [2024-11-19 18:28:52.529588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.198 [2024-11-19 18:28:52.529742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.198 [2024-11-19 18:28:52.529749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.198 [2024-11-19 18:28:52.529756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.198 [2024-11-19 18:28:52.529763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.198 [2024-11-19 18:28:52.541383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.198 [2024-11-19 18:28:52.541998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.198 [2024-11-19 18:28:52.542030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.198 [2024-11-19 18:28:52.542039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.198 [2024-11-19 18:28:52.542213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.198 [2024-11-19 18:28:52.542369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.198 [2024-11-19 18:28:52.542376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.198 [2024-11-19 18:28:52.542382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.198 [2024-11-19 18:28:52.542388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.198 [2024-11-19 18:28:52.554005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.198 [2024-11-19 18:28:52.554603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.198 [2024-11-19 18:28:52.554634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.198 [2024-11-19 18:28:52.554644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.198 [2024-11-19 18:28:52.554811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.198 [2024-11-19 18:28:52.554965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.198 [2024-11-19 18:28:52.554972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.198 [2024-11-19 18:28:52.554979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.198 [2024-11-19 18:28:52.554985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.198 [2024-11-19 18:28:52.566693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.198 [2024-11-19 18:28:52.567169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.198 [2024-11-19 18:28:52.567190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.198 [2024-11-19 18:28:52.567196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.198 [2024-11-19 18:28:52.567348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.198 [2024-11-19 18:28:52.567499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.198 [2024-11-19 18:28:52.567506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.198 [2024-11-19 18:28:52.567511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.198 [2024-11-19 18:28:52.567517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.198 [2024-11-19 18:28:52.579411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.198 [2024-11-19 18:28:52.579851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.198 [2024-11-19 18:28:52.579864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.198 [2024-11-19 18:28:52.579870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.198 [2024-11-19 18:28:52.580021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.198 [2024-11-19 18:28:52.580177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.198 [2024-11-19 18:28:52.580184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.198 [2024-11-19 18:28:52.580190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.198 [2024-11-19 18:28:52.580195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.198 [2024-11-19 18:28:52.592092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.198 [2024-11-19 18:28:52.592881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.198 [2024-11-19 18:28:52.592900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.199 [2024-11-19 18:28:52.592907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.199 [2024-11-19 18:28:52.593065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.199 [2024-11-19 18:28:52.593224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.199 [2024-11-19 18:28:52.593231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.199 [2024-11-19 18:28:52.593237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.199 [2024-11-19 18:28:52.593242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.199 [2024-11-19 18:28:52.604722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.199 [2024-11-19 18:28:52.605217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.199 [2024-11-19 18:28:52.605232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.199 [2024-11-19 18:28:52.605238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.199 [2024-11-19 18:28:52.605393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.199 [2024-11-19 18:28:52.605545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.199 [2024-11-19 18:28:52.605551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.199 [2024-11-19 18:28:52.605557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.199 [2024-11-19 18:28:52.605562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.199 [2024-11-19 18:28:52.615856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:51.199 [2024-11-19 18:28:52.617343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.199 [2024-11-19 18:28:52.617808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.199 [2024-11-19 18:28:52.617822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.199 [2024-11-19 18:28:52.617828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.199 [2024-11-19 18:28:52.617980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.199 [2024-11-19 18:28:52.618132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.199 [2024-11-19 18:28:52.618140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.199 [2024-11-19 18:28:52.618145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.199 [2024-11-19 18:28:52.618150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.199 [2024-11-19 18:28:52.630060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.199 [2024-11-19 18:28:52.630533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.199 [2024-11-19 18:28:52.630548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.199 [2024-11-19 18:28:52.630554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.199 [2024-11-19 18:28:52.630705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.199 [2024-11-19 18:28:52.630857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.199 [2024-11-19 18:28:52.630864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.199 [2024-11-19 18:28:52.630869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.199 [2024-11-19 18:28:52.630874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.199 [2024-11-19 18:28:52.642780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.199 [2024-11-19 18:28:52.643276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.199 [2024-11-19 18:28:52.643290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.199 [2024-11-19 18:28:52.643296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.199 [2024-11-19 18:28:52.643447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.199 [2024-11-19 18:28:52.643598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.199 [2024-11-19 18:28:52.643610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.199 [2024-11-19 18:28:52.643615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.199 [2024-11-19 18:28:52.643621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.199 [2024-11-19 18:28:52.645304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.199 [2024-11-19 18:28:52.645331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.199 [2024-11-19 18:28:52.645337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.199 [2024-11-19 18:28:52.645342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:51.199 [2024-11-19 18:28:52.645346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:51.199 [2024-11-19 18:28:52.646458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:51.199 [2024-11-19 18:28:52.646608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.199 [2024-11-19 18:28:52.646610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:51.199 [2024-11-19 18:28:52.655521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.199 [2024-11-19 18:28:52.655952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.199 [2024-11-19 18:28:52.655966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.199 [2024-11-19 18:28:52.655972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.199 [2024-11-19 18:28:52.656124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.199 [2024-11-19 18:28:52.656280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.199 [2024-11-19 18:28:52.656287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.199 [2024-11-19 18:28:52.656293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.199 [2024-11-19 18:28:52.656298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.461 [2024-11-19 18:28:52.668202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.461 [2024-11-19 18:28:52.668713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.461 [2024-11-19 18:28:52.668727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.461 [2024-11-19 18:28:52.668733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.461 [2024-11-19 18:28:52.668885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.461 [2024-11-19 18:28:52.669037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.461 [2024-11-19 18:28:52.669044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.461 [2024-11-19 18:28:52.669050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.461 [2024-11-19 18:28:52.669055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.461 [2024-11-19 18:28:52.680950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.461 [2024-11-19 18:28:52.681412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.461 [2024-11-19 18:28:52.681430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.461 [2024-11-19 18:28:52.681436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.461 [2024-11-19 18:28:52.681587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.461 [2024-11-19 18:28:52.681740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.461 [2024-11-19 18:28:52.681747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.461 [2024-11-19 18:28:52.681753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.461 [2024-11-19 18:28:52.681758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.461 [2024-11-19 18:28:52.693655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.461 [2024-11-19 18:28:52.694162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.461 [2024-11-19 18:28:52.694176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.461 [2024-11-19 18:28:52.694182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.461 [2024-11-19 18:28:52.694333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.461 [2024-11-19 18:28:52.694485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.461 [2024-11-19 18:28:52.694491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.461 [2024-11-19 18:28:52.694497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.461 [2024-11-19 18:28:52.694502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.461 [2024-11-19 18:28:52.706365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.461 [2024-11-19 18:28:52.706864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.461 [2024-11-19 18:28:52.706878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.461 [2024-11-19 18:28:52.706884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.461 [2024-11-19 18:28:52.707034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.461 [2024-11-19 18:28:52.707189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.461 [2024-11-19 18:28:52.707196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.461 [2024-11-19 18:28:52.707202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.461 [2024-11-19 18:28:52.707207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.461 [2024-11-19 18:28:52.719109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.461 [2024-11-19 18:28:52.719604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.461 [2024-11-19 18:28:52.719618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.461 [2024-11-19 18:28:52.719623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.461 [2024-11-19 18:28:52.719777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.461 [2024-11-19 18:28:52.719929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.461 [2024-11-19 18:28:52.719935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.461 [2024-11-19 18:28:52.719941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.462 [2024-11-19 18:28:52.719946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.462 [2024-11-19 18:28:52.731841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.462 [2024-11-19 18:28:52.732302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.462 [2024-11-19 18:28:52.732316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.462 [2024-11-19 18:28:52.732322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.462 [2024-11-19 18:28:52.732473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.462 [2024-11-19 18:28:52.732624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.462 [2024-11-19 18:28:52.732630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.462 [2024-11-19 18:28:52.732635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.462 [2024-11-19 18:28:52.732640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.462 [2024-11-19 18:28:52.744536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.462 [2024-11-19 18:28:52.744988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.462 [2024-11-19 18:28:52.745001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.462 [2024-11-19 18:28:52.745007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.462 [2024-11-19 18:28:52.745162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.462 [2024-11-19 18:28:52.745315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.462 [2024-11-19 18:28:52.745322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.462 [2024-11-19 18:28:52.745327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.462 [2024-11-19 18:28:52.745332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.462 [2024-11-19 18:28:52.757224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.462 [2024-11-19 18:28:52.757629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.462 [2024-11-19 18:28:52.757642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.462 [2024-11-19 18:28:52.757647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.462 [2024-11-19 18:28:52.757798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.462 [2024-11-19 18:28:52.757950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.462 [2024-11-19 18:28:52.757960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.462 [2024-11-19 18:28:52.757965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.462 [2024-11-19 18:28:52.757970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.462 [2024-11-19 18:28:52.769856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.462 [2024-11-19 18:28:52.770306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.462 [2024-11-19 18:28:52.770318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.462 [2024-11-19 18:28:52.770324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.462 [2024-11-19 18:28:52.770475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.462 [2024-11-19 18:28:52.770626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.462 [2024-11-19 18:28:52.770632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.462 [2024-11-19 18:28:52.770638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.462 [2024-11-19 18:28:52.770643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.462 [2024-11-19 18:28:52.782532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.462 [2024-11-19 18:28:52.783043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.462 [2024-11-19 18:28:52.783057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.462 [2024-11-19 18:28:52.783063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.462 [2024-11-19 18:28:52.783218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.462 [2024-11-19 18:28:52.783370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.462 [2024-11-19 18:28:52.783378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.462 [2024-11-19 18:28:52.783384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.462 [2024-11-19 18:28:52.783389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.462 5041.00 IOPS, 19.69 MiB/s [2024-11-19T17:28:52.933Z] [2024-11-19 18:28:52.795274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.462 [2024-11-19 18:28:52.795634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.462 [2024-11-19 18:28:52.795648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.462 [2024-11-19 18:28:52.795654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.462 [2024-11-19 18:28:52.795806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.462 [2024-11-19 18:28:52.795957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.462 [2024-11-19 18:28:52.795964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.462 [2024-11-19 18:28:52.795969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.462 [2024-11-19 18:28:52.795977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.462 [2024-11-19 18:28:52.808023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.462 [2024-11-19 18:28:52.808476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.462 [2024-11-19 18:28:52.808490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.462 [2024-11-19 18:28:52.808496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.462 [2024-11-19 18:28:52.808647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.462 [2024-11-19 18:28:52.808798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.462 [2024-11-19 18:28:52.808805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.462 [2024-11-19 18:28:52.808810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.462 [2024-11-19 18:28:52.808815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.462 [2024-11-19 18:28:52.820719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.462 [2024-11-19 18:28:52.821265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.462 [2024-11-19 18:28:52.821300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.462 [2024-11-19 18:28:52.821309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.462 [2024-11-19 18:28:52.821483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.462 [2024-11-19 18:28:52.821638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.462 [2024-11-19 18:28:52.821646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.462 [2024-11-19 18:28:52.821652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.463 [2024-11-19 18:28:52.821658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.463 [2024-11-19 18:28:52.833464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.463 [2024-11-19 18:28:52.834082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.463 [2024-11-19 18:28:52.834114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.463 [2024-11-19 18:28:52.834123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.463 [2024-11-19 18:28:52.834297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.463 [2024-11-19 18:28:52.834452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.463 [2024-11-19 18:28:52.834459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.463 [2024-11-19 18:28:52.834466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.463 [2024-11-19 18:28:52.834471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.463 [2024-11-19 18:28:52.846075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.463 [2024-11-19 18:28:52.846691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.463 [2024-11-19 18:28:52.846723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.463 [2024-11-19 18:28:52.846732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.463 [2024-11-19 18:28:52.846899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.463 [2024-11-19 18:28:52.847054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.463 [2024-11-19 18:28:52.847061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.463 [2024-11-19 18:28:52.847066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.463 [2024-11-19 18:28:52.847072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.463 [2024-11-19 18:28:52.858824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.463 [2024-11-19 18:28:52.859444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.463 [2024-11-19 18:28:52.859477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.463 [2024-11-19 18:28:52.859486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.463 [2024-11-19 18:28:52.859652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.463 [2024-11-19 18:28:52.859807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.463 [2024-11-19 18:28:52.859814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.463 [2024-11-19 18:28:52.859820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.463 [2024-11-19 18:28:52.859826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.463 [2024-11-19 18:28:52.871460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.463 [2024-11-19 18:28:52.871929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.463 [2024-11-19 18:28:52.871946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.463 [2024-11-19 18:28:52.871952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.463 [2024-11-19 18:28:52.872103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.463 [2024-11-19 18:28:52.872260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.463 [2024-11-19 18:28:52.872267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.463 [2024-11-19 18:28:52.872272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.463 [2024-11-19 18:28:52.872277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.463 [2024-11-19 18:28:52.884161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.463 [2024-11-19 18:28:52.884725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.463 [2024-11-19 18:28:52.884757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.463 [2024-11-19 18:28:52.884766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.463 [2024-11-19 18:28:52.884936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.463 [2024-11-19 18:28:52.885091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.463 [2024-11-19 18:28:52.885098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.463 [2024-11-19 18:28:52.885103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.463 [2024-11-19 18:28:52.885109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.463 [2024-11-19 18:28:52.896865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.463 [2024-11-19 18:28:52.897447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.463 [2024-11-19 18:28:52.897479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.463 [2024-11-19 18:28:52.897487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.463 [2024-11-19 18:28:52.897655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.463 [2024-11-19 18:28:52.897809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.463 [2024-11-19 18:28:52.897816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.463 [2024-11-19 18:28:52.897822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.463 [2024-11-19 18:28:52.897827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.463 [2024-11-19 18:28:52.909588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.463 [2024-11-19 18:28:52.910052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.463 [2024-11-19 18:28:52.910068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.463 [2024-11-19 18:28:52.910074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.463 [2024-11-19 18:28:52.910230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.463 [2024-11-19 18:28:52.910382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.463 [2024-11-19 18:28:52.910388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.463 [2024-11-19 18:28:52.910394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.463 [2024-11-19 18:28:52.910399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.463 [2024-11-19 18:28:52.922294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.463 [2024-11-19 18:28:52.922785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.463 [2024-11-19 18:28:52.922817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.463 [2024-11-19 18:28:52.922826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.463 [2024-11-19 18:28:52.922995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.463 [2024-11-19 18:28:52.923150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.463 [2024-11-19 18:28:52.923167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.463 [2024-11-19 18:28:52.923173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.463 [2024-11-19 18:28:52.923178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.725 [2024-11-19 18:28:52.934929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.725 [2024-11-19 18:28:52.935353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.725 [2024-11-19 18:28:52.935370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.725 [2024-11-19 18:28:52.935375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.725 [2024-11-19 18:28:52.935527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.725 [2024-11-19 18:28:52.935679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.725 [2024-11-19 18:28:52.935686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.725 [2024-11-19 18:28:52.935691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.725 [2024-11-19 18:28:52.935696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.725 [2024-11-19 18:28:52.947590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.725 [2024-11-19 18:28:52.948173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.725 [2024-11-19 18:28:52.948205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.725 [2024-11-19 18:28:52.948213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.725 [2024-11-19 18:28:52.948383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.725 [2024-11-19 18:28:52.948537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.725 [2024-11-19 18:28:52.948544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.725 [2024-11-19 18:28:52.948549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.725 [2024-11-19 18:28:52.948555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.725 [2024-11-19 18:28:52.960305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.725 [2024-11-19 18:28:52.960765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.725 [2024-11-19 18:28:52.960780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.725 [2024-11-19 18:28:52.960786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.725 [2024-11-19 18:28:52.960937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.725 [2024-11-19 18:28:52.961089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.725 [2024-11-19 18:28:52.961096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.725 [2024-11-19 18:28:52.961101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.725 [2024-11-19 18:28:52.961110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.725 [2024-11-19 18:28:52.972996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.725 [2024-11-19 18:28:52.973549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.725 [2024-11-19 18:28:52.973581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.725 [2024-11-19 18:28:52.973590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.725 [2024-11-19 18:28:52.973757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.725 [2024-11-19 18:28:52.973912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.725 [2024-11-19 18:28:52.973919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.725 [2024-11-19 18:28:52.973925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.725 [2024-11-19 18:28:52.973931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.725 [2024-11-19 18:28:52.985681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.725 [2024-11-19 18:28:52.986287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.725 [2024-11-19 18:28:52.986319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.725 [2024-11-19 18:28:52.986327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.725 [2024-11-19 18:28:52.986495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.725 [2024-11-19 18:28:52.986650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.725 [2024-11-19 18:28:52.986657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.726 [2024-11-19 18:28:52.986663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.726 [2024-11-19 18:28:52.986668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.726 [2024-11-19 18:28:52.998426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.726 [2024-11-19 18:28:52.998892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.726 [2024-11-19 18:28:52.998907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.726 [2024-11-19 18:28:52.998913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.726 [2024-11-19 18:28:52.999064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.726 [2024-11-19 18:28:52.999226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.726 [2024-11-19 18:28:52.999234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.726 [2024-11-19 18:28:52.999240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.726 [2024-11-19 18:28:52.999245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.726 [2024-11-19 18:28:53.011129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.726 [2024-11-19 18:28:53.011777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.726 [2024-11-19 18:28:53.011813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.726 [2024-11-19 18:28:53.011821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.726 [2024-11-19 18:28:53.011989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.726 [2024-11-19 18:28:53.012143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.726 [2024-11-19 18:28:53.012150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.726 [2024-11-19 18:28:53.012156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.726 [2024-11-19 18:28:53.012167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.726 [2024-11-19 18:28:53.023831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.726 [2024-11-19 18:28:53.024307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.726 [2024-11-19 18:28:53.024324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.726 [2024-11-19 18:28:53.024330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.726 [2024-11-19 18:28:53.024482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.726 [2024-11-19 18:28:53.024634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.726 [2024-11-19 18:28:53.024641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.726 [2024-11-19 18:28:53.024646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.726 [2024-11-19 18:28:53.024652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.726 [2024-11-19 18:28:53.036544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.726 [2024-11-19 18:28:53.036890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.726 [2024-11-19 18:28:53.036903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.726 [2024-11-19 18:28:53.036909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.726 [2024-11-19 18:28:53.037060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.726 [2024-11-19 18:28:53.037216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.726 [2024-11-19 18:28:53.037222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.726 [2024-11-19 18:28:53.037227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.726 [2024-11-19 18:28:53.037232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.726 [2024-11-19 18:28:53.049262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.726 [2024-11-19 18:28:53.049794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.726 [2024-11-19 18:28:53.049825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.726 [2024-11-19 18:28:53.049834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.726 [2024-11-19 18:28:53.050006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.726 [2024-11-19 18:28:53.050167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.726 [2024-11-19 18:28:53.050175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.726 [2024-11-19 18:28:53.050181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.726 [2024-11-19 18:28:53.050187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.726 [2024-11-19 18:28:53.061988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.726 [2024-11-19 18:28:53.062421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.726 [2024-11-19 18:28:53.062453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.726 [2024-11-19 18:28:53.062462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.726 [2024-11-19 18:28:53.062631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.726 [2024-11-19 18:28:53.062785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.726 [2024-11-19 18:28:53.062792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.726 [2024-11-19 18:28:53.062798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.726 [2024-11-19 18:28:53.062804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.726 [2024-11-19 18:28:53.074700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.726 [2024-11-19 18:28:53.075135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.726 [2024-11-19 18:28:53.075171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.726 [2024-11-19 18:28:53.075180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.726 [2024-11-19 18:28:53.075348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.726 [2024-11-19 18:28:53.075502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.726 [2024-11-19 18:28:53.075509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.726 [2024-11-19 18:28:53.075515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.726 [2024-11-19 18:28:53.075521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.726 [2024-11-19 18:28:53.087413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.726 [2024-11-19 18:28:53.087878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.726 [2024-11-19 18:28:53.087893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.726 [2024-11-19 18:28:53.087899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.726 [2024-11-19 18:28:53.088050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.726 [2024-11-19 18:28:53.088205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.726 [2024-11-19 18:28:53.088215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.726 [2024-11-19 18:28:53.088221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.726 [2024-11-19 18:28:53.088226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.726 [2024-11-19 18:28:53.100118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.726 [2024-11-19 18:28:53.100571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.726 [2024-11-19 18:28:53.100585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.726 [2024-11-19 18:28:53.100591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.726 [2024-11-19 18:28:53.100742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.726 [2024-11-19 18:28:53.100893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.726 [2024-11-19 18:28:53.100899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.726 [2024-11-19 18:28:53.100904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.726 [2024-11-19 18:28:53.100910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.726 [2024-11-19 18:28:53.112798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.726 [2024-11-19 18:28:53.113395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.727 [2024-11-19 18:28:53.113427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.727 [2024-11-19 18:28:53.113435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.727 [2024-11-19 18:28:53.113602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.727 [2024-11-19 18:28:53.113757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.727 [2024-11-19 18:28:53.113764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.727 [2024-11-19 18:28:53.113770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.727 [2024-11-19 18:28:53.113776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.727 [2024-11-19 18:28:53.125531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.727 [2024-11-19 18:28:53.126035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.727 [2024-11-19 18:28:53.126067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.727 [2024-11-19 18:28:53.126076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.727 [2024-11-19 18:28:53.126249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.727 [2024-11-19 18:28:53.126404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.727 [2024-11-19 18:28:53.126411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.727 [2024-11-19 18:28:53.126417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.727 [2024-11-19 18:28:53.126426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.727 [2024-11-19 18:28:53.138181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.727 [2024-11-19 18:28:53.138741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.727 [2024-11-19 18:28:53.138773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.727 [2024-11-19 18:28:53.138782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.727 [2024-11-19 18:28:53.138949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.727 [2024-11-19 18:28:53.139104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.727 [2024-11-19 18:28:53.139111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.727 [2024-11-19 18:28:53.139118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.727 [2024-11-19 18:28:53.139124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.727 [2024-11-19 18:28:53.150880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.727 [2024-11-19 18:28:53.151378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.727 [2024-11-19 18:28:53.151394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.727 [2024-11-19 18:28:53.151400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.727 [2024-11-19 18:28:53.151551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.727 [2024-11-19 18:28:53.151703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.727 [2024-11-19 18:28:53.151710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.727 [2024-11-19 18:28:53.151715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.727 [2024-11-19 18:28:53.151720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.727 [2024-11-19 18:28:53.163614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.727 [2024-11-19 18:28:53.164068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.727 [2024-11-19 18:28:53.164082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.727 [2024-11-19 18:28:53.164087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.727 [2024-11-19 18:28:53.164242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.727 [2024-11-19 18:28:53.164394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.727 [2024-11-19 18:28:53.164400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.727 [2024-11-19 18:28:53.164405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.727 [2024-11-19 18:28:53.164410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.727 [2024-11-19 18:28:53.176300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.727 [2024-11-19 18:28:53.176675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.727 [2024-11-19 18:28:53.176694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.727 [2024-11-19 18:28:53.176700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.727 [2024-11-19 18:28:53.176851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.727 [2024-11-19 18:28:53.177002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.727 [2024-11-19 18:28:53.177009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.727 [2024-11-19 18:28:53.177014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.727 [2024-11-19 18:28:53.177020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.727 [2024-11-19 18:28:53.188909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.727 [2024-11-19 18:28:53.189373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.727 [2024-11-19 18:28:53.189405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.727 [2024-11-19 18:28:53.189414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.727 [2024-11-19 18:28:53.189580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.727 [2024-11-19 18:28:53.189735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.727 [2024-11-19 18:28:53.189742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.727 [2024-11-19 18:28:53.189747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.727 [2024-11-19 18:28:53.189753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.989 [2024-11-19 18:28:53.201661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.989 [2024-11-19 18:28:53.202126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.989 [2024-11-19 18:28:53.202141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.989 [2024-11-19 18:28:53.202147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.989 [2024-11-19 18:28:53.202302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.989 [2024-11-19 18:28:53.202455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.989 [2024-11-19 18:28:53.202462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.990 [2024-11-19 18:28:53.202467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.990 [2024-11-19 18:28:53.202472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.990 [2024-11-19 18:28:53.214399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.990 [2024-11-19 18:28:53.214747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.990 [2024-11-19 18:28:53.214764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.990 [2024-11-19 18:28:53.214770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.990 [2024-11-19 18:28:53.214927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.990 [2024-11-19 18:28:53.215079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.990 [2024-11-19 18:28:53.215086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.990 [2024-11-19 18:28:53.215091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.990 [2024-11-19 18:28:53.215096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.990 [2024-11-19 18:28:53.227136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.990 [2024-11-19 18:28:53.227553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.990 [2024-11-19 18:28:53.227583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.990 [2024-11-19 18:28:53.227592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.990 [2024-11-19 18:28:53.227762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.990 [2024-11-19 18:28:53.227917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.990 [2024-11-19 18:28:53.227925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.990 [2024-11-19 18:28:53.227930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.990 [2024-11-19 18:28:53.227936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.990 [2024-11-19 18:28:53.239839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.990 [2024-11-19 18:28:53.240212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.990 [2024-11-19 18:28:53.240228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.990 [2024-11-19 18:28:53.240234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.990 [2024-11-19 18:28:53.240386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.990 [2024-11-19 18:28:53.240539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.990 [2024-11-19 18:28:53.240546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.990 [2024-11-19 18:28:53.240551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.990 [2024-11-19 18:28:53.240556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.990 [2024-11-19 18:28:53.252452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.990 [2024-11-19 18:28:53.253069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.990 [2024-11-19 18:28:53.253101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.990 [2024-11-19 18:28:53.253110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.990 [2024-11-19 18:28:53.253284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.990 [2024-11-19 18:28:53.253439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.990 [2024-11-19 18:28:53.253450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.990 [2024-11-19 18:28:53.253456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.990 [2024-11-19 18:28:53.253461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.990 [2024-11-19 18:28:53.265074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.990 [2024-11-19 18:28:53.265664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.990 [2024-11-19 18:28:53.265696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.990 [2024-11-19 18:28:53.265705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.990 [2024-11-19 18:28:53.265872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.990 [2024-11-19 18:28:53.266027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.990 [2024-11-19 18:28:53.266034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.990 [2024-11-19 18:28:53.266040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.990 [2024-11-19 18:28:53.266047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.990 [2024-11-19 18:28:53.277807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.990 [2024-11-19 18:28:53.278373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.990 [2024-11-19 18:28:53.278405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.990 [2024-11-19 18:28:53.278415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.990 [2024-11-19 18:28:53.278582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.990 [2024-11-19 18:28:53.278737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.990 [2024-11-19 18:28:53.278744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.990 [2024-11-19 18:28:53.278750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.990 [2024-11-19 18:28:53.278757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.990 [2024-11-19 18:28:53.290523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.990 [2024-11-19 18:28:53.290991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.990 [2024-11-19 18:28:53.291007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.990 [2024-11-19 18:28:53.291013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.990 [2024-11-19 18:28:53.291168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.990 [2024-11-19 18:28:53.291321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.990 [2024-11-19 18:28:53.291328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.990 [2024-11-19 18:28:53.291334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.990 [2024-11-19 18:28:53.291344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.990 [2024-11-19 18:28:53.303249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.990 [2024-11-19 18:28:53.303659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.990 [2024-11-19 18:28:53.303672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.990 [2024-11-19 18:28:53.303679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.990 [2024-11-19 18:28:53.303830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.990 [2024-11-19 18:28:53.303981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.990 [2024-11-19 18:28:53.303988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.990 [2024-11-19 18:28:53.303994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.990 [2024-11-19 18:28:53.303999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.990 [2024-11-19 18:28:53.315894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.990 [2024-11-19 18:28:53.316380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.990 [2024-11-19 18:28:53.316394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.990 [2024-11-19 18:28:53.316400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.990 [2024-11-19 18:28:53.316550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.990 [2024-11-19 18:28:53.316702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.990 [2024-11-19 18:28:53.316708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.990 [2024-11-19 18:28:53.316714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.990 [2024-11-19 18:28:53.316718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.990 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.991 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:51.991 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:51.991 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:51.991 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.991 [2024-11-19 18:28:53.328615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.991 [2024-11-19 18:28:53.328816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-19 18:28:53.328828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.991 [2024-11-19 18:28:53.328834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.991 [2024-11-19 18:28:53.328984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.991 [2024-11-19 18:28:53.329136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.991 [2024-11-19 18:28:53.329143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.991 [2024-11-19 18:28:53.329152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.991 [2024-11-19 18:28:53.329162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.991 [2024-11-19 18:28:53.341339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.991 [2024-11-19 18:28:53.341699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-19 18:28:53.341712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.991 [2024-11-19 18:28:53.341717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.991 [2024-11-19 18:28:53.341868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.991 [2024-11-19 18:28:53.342019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.991 [2024-11-19 18:28:53.342026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.991 [2024-11-19 18:28:53.342031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.991 [2024-11-19 18:28:53.342036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.991 [2024-11-19 18:28:53.354074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.991 [2024-11-19 18:28:53.354409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-19 18:28:53.354423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.991 [2024-11-19 18:28:53.354428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.991 [2024-11-19 18:28:53.354579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.991 [2024-11-19 18:28:53.354730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.991 [2024-11-19 18:28:53.354737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.991 [2024-11-19 18:28:53.354742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.991 [2024-11-19 18:28:53.354747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.991 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.991 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:51.991 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.991 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.991 [2024-11-19 18:28:53.362153] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.991 [2024-11-19 18:28:53.366778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.991 [2024-11-19 18:28:53.367240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-19 18:28:53.367254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.991 [2024-11-19 18:28:53.367259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.991 [2024-11-19 18:28:53.367410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.991 [2024-11-19 18:28:53.367561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.991 [2024-11-19 18:28:53.367571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.991 [2024-11-19 18:28:53.367576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.991 [2024-11-19 18:28:53.367581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.991 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.991 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:51.991 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.991 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.991 [2024-11-19 18:28:53.379471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.991 [2024-11-19 18:28:53.379937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-19 18:28:53.379950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.991 [2024-11-19 18:28:53.379955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.991 [2024-11-19 18:28:53.380105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.991 [2024-11-19 18:28:53.380260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.991 [2024-11-19 18:28:53.380267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.991 [2024-11-19 18:28:53.380272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.991 [2024-11-19 18:28:53.380277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.991 [2024-11-19 18:28:53.392164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.991 [2024-11-19 18:28:53.392608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-19 18:28:53.392621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.991 [2024-11-19 18:28:53.392627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.991 [2024-11-19 18:28:53.392777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.991 [2024-11-19 18:28:53.392928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.991 [2024-11-19 18:28:53.392935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.991 [2024-11-19 18:28:53.392940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.991 [2024-11-19 18:28:53.392946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.991 [2024-11-19 18:28:53.404840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.991 [2024-11-19 18:28:53.405282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-19 18:28:53.405295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.991 [2024-11-19 18:28:53.405301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.991 [2024-11-19 18:28:53.405452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.991 Malloc0 00:29:51.991 [2024-11-19 18:28:53.405607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.991 [2024-11-19 18:28:53.405614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.991 [2024-11-19 18:28:53.405619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.991 [2024-11-19 18:28:53.405624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.991 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.991 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:51.991 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.991 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.991 [2024-11-19 18:28:53.417517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.991 [2024-11-19 18:28:53.417854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-19 18:28:53.417867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.991 [2024-11-19 18:28:53.417873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.991 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.991 [2024-11-19 18:28:53.418024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.991 [2024-11-19 18:28:53.418178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.991 [2024-11-19 18:28:53.418186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.991 [2024-11-19 18:28:53.418191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.992 [2024-11-19 18:28:53.418196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:51.992 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:51.992 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.992 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.992 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.992 [2024-11-19 18:28:53.430221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:51.992 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:51.992 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.992 [2024-11-19 18:28:53.430672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-19 18:28:53.430685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165c000 with addr=10.0.0.2, port=4420 00:29:51.992 [2024-11-19 18:28:53.430691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165c000 is same with the state(6) to be set 00:29:51.992 [2024-11-19 18:28:53.430842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165c000 (9): Bad file descriptor 00:29:51.992 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:51.992 [2024-11-19 18:28:53.430994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:51.992 [2024-11-19 18:28:53.431001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:51.992 [2024-11-19 18:28:53.431009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:51.992 [2024-11-19 18:28:53.431014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:51.992 [2024-11-19 18:28:53.437079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.992 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.992 18:28:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2170342 00:29:51.992 [2024-11-19 18:28:53.442905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:52.253 [2024-11-19 18:28:53.545973] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:29:53.453 4774.86 IOPS, 18.65 MiB/s [2024-11-19T17:28:55.866Z] 5867.88 IOPS, 22.92 MiB/s [2024-11-19T17:28:56.808Z] 6694.33 IOPS, 26.15 MiB/s [2024-11-19T17:28:58.190Z] 7348.80 IOPS, 28.71 MiB/s [2024-11-19T17:28:59.129Z] 7907.09 IOPS, 30.89 MiB/s [2024-11-19T17:29:00.069Z] 8363.83 IOPS, 32.67 MiB/s [2024-11-19T17:29:01.011Z] 8760.23 IOPS, 34.22 MiB/s [2024-11-19T17:29:01.955Z] 9081.29 IOPS, 35.47 MiB/s [2024-11-19T17:29:01.955Z] 9375.20 IOPS, 36.62 MiB/s 00:30:00.484 Latency(us) 00:30:00.484 [2024-11-19T17:29:01.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.484 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:00.484 Verification LBA range: start 0x0 length 0x4000 00:30:00.484 Nvme1n1 : 15.01 9378.38 36.63 13778.70 0.00 5508.84 378.88 14308.69 00:30:00.484 [2024-11-19T17:29:01.955Z] =================================================================================================================== 00:30:00.484 [2024-11-19T17:29:01.955Z] Total : 9378.38 36.63 13778.70 0.00 5508.84 378.88 14308.69 00:30:00.484 18:29:01 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:00.484 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:00.484 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.484 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:00.484 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.484 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:00.484 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:00.484 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:00.484 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:30:00.484 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:00.484 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:30:00.484 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:00.484 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:00.484 rmmod nvme_tcp 00:30:00.745 rmmod nvme_fabrics 00:30:00.745 rmmod nvme_keyring 00:30:00.745 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:00.745 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:30:00.745 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:30:00.745 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2171557 ']' 00:30:00.745 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2171557 00:30:00.745 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2171557 ']' 
00:30:00.745 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2171557 00:30:00.745 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:30:00.745 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:00.745 18:29:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2171557 00:30:00.745 18:29:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:00.745 18:29:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:00.745 18:29:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2171557' 00:30:00.745 killing process with pid 2171557 00:30:00.745 18:29:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2171557 00:30:00.745 18:29:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2171557 00:30:00.745 18:29:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:00.745 18:29:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:00.745 18:29:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:00.745 18:29:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:00.745 18:29:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:00.745 18:29:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:00.745 18:29:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:30:00.745 18:29:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:00.745 18:29:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:30:00.745 18:29:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.745 18:29:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.745 18:29:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.307 18:29:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:03.307 00:30:03.307 real 0m28.068s 00:30:03.307 user 1m2.688s 00:30:03.307 sys 0m7.607s 00:30:03.307 18:29:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:03.307 18:29:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:03.307 ************************************ 00:30:03.307 END TEST nvmf_bdevperf 00:30:03.307 ************************************ 00:30:03.307 18:29:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:03.307 18:29:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:03.307 18:29:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:03.307 18:29:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.307 ************************************ 00:30:03.308 START TEST nvmf_target_disconnect 00:30:03.308 ************************************ 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:03.308 * Looking for test storage... 
00:30:03.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:03.308 18:29:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:03.308 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:03.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.309 
--rc genhtml_branch_coverage=1 00:30:03.309 --rc genhtml_function_coverage=1 00:30:03.309 --rc genhtml_legend=1 00:30:03.309 --rc geninfo_all_blocks=1 00:30:03.309 --rc geninfo_unexecuted_blocks=1 00:30:03.309 00:30:03.309 ' 00:30:03.309 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:03.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.309 --rc genhtml_branch_coverage=1 00:30:03.309 --rc genhtml_function_coverage=1 00:30:03.309 --rc genhtml_legend=1 00:30:03.309 --rc geninfo_all_blocks=1 00:30:03.309 --rc geninfo_unexecuted_blocks=1 00:30:03.309 00:30:03.309 ' 00:30:03.309 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:03.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.309 --rc genhtml_branch_coverage=1 00:30:03.309 --rc genhtml_function_coverage=1 00:30:03.309 --rc genhtml_legend=1 00:30:03.309 --rc geninfo_all_blocks=1 00:30:03.309 --rc geninfo_unexecuted_blocks=1 00:30:03.309 00:30:03.309 ' 00:30:03.309 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:03.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.309 --rc genhtml_branch_coverage=1 00:30:03.309 --rc genhtml_function_coverage=1 00:30:03.309 --rc genhtml_legend=1 00:30:03.309 --rc geninfo_all_blocks=1 00:30:03.309 --rc geninfo_unexecuted_blocks=1 00:30:03.309 00:30:03.309 ' 00:30:03.309 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.309 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:03.309 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.309 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:30:03.309 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.309 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.309 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.309 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.309 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.309 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:03.310 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.310 18:29:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:03.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:03.311 18:29:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:11.450 
18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.450 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:11.451 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:11.451 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:11.451 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:11.451 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:11.451 18:29:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:11.451 18:29:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:11.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:11.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:30:11.451 00:30:11.451 --- 10.0.0.2 ping statistics --- 00:30:11.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.451 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:11.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:11.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:30:11.451 00:30:11.451 --- 10.0.0.1 ping statistics --- 00:30:11.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.451 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:11.451 18:29:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:11.451 ************************************ 00:30:11.451 START TEST nvmf_target_disconnect_tc1 00:30:11.451 ************************************ 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:11.451 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:11.452 [2024-11-19 18:29:12.267518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.452 [2024-11-19 18:29:12.267623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2193ad0 with 
addr=10.0.0.2, port=4420 00:30:11.452 [2024-11-19 18:29:12.267660] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:11.452 [2024-11-19 18:29:12.267679] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:11.452 [2024-11-19 18:29:12.267688] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:11.452 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:11.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:11.452 Initializing NVMe Controllers 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:11.452 00:30:11.452 real 0m0.148s 00:30:11.452 user 0m0.062s 00:30:11.452 sys 0m0.086s 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:11.452 ************************************ 00:30:11.452 END TEST nvmf_target_disconnect_tc1 00:30:11.452 ************************************ 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:11.452 18:29:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:11.452 ************************************ 00:30:11.452 START TEST nvmf_target_disconnect_tc2 00:30:11.452 ************************************ 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2177633 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2177633 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2177633 ']' 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.452 18:29:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:11.452 [2024-11-19 18:29:12.433902] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:30:11.452 [2024-11-19 18:29:12.433963] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.452 [2024-11-19 18:29:12.534152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:11.452 [2024-11-19 18:29:12.589728] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.452 [2024-11-19 18:29:12.589778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.452 [2024-11-19 18:29:12.589787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.452 [2024-11-19 18:29:12.589794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.452 [2024-11-19 18:29:12.589801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:11.452 [2024-11-19 18:29:12.591834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:11.452 [2024-11-19 18:29:12.591991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:11.452 [2024-11-19 18:29:12.592149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:11.452 [2024-11-19 18:29:12.592150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.025 Malloc0 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.025 18:29:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.025 [2024-11-19 18:29:13.349695] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.025 18:29:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.025 [2024-11-19 18:29:13.390108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2177761 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:12.025 18:29:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:14.600 18:29:15 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2177633 00:30:14.600 18:29:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:14.600 Write completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Write completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Write completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Write completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Write completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Write completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Write completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Write completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Write completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 
Write completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Write completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Write completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Write completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 Read completed with error (sct=0, sc=8) 00:30:14.600 starting I/O failed 00:30:14.600 [2024-11-19 18:29:15.425206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.600 [2024-11-19 18:29:15.425673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.600 [2024-11-19 18:29:15.425699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.600 qpair failed and we were unable to recover it. 00:30:14.600 [2024-11-19 18:29:15.425911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.600 [2024-11-19 18:29:15.425923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.600 qpair failed and we were unable to recover it. 
00:30:14.600 [2024-11-19 18:29:15.426177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.600 [2024-11-19 18:29:15.426189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.600 qpair failed and we were unable to recover it. 00:30:14.600 [2024-11-19 18:29:15.426438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.600 [2024-11-19 18:29:15.426450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.600 qpair failed and we were unable to recover it. 00:30:14.600 [2024-11-19 18:29:15.426764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.600 [2024-11-19 18:29:15.426781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.600 qpair failed and we were unable to recover it. 00:30:14.600 [2024-11-19 18:29:15.427057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.600 [2024-11-19 18:29:15.427069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.600 qpair failed and we were unable to recover it. 00:30:14.600 [2024-11-19 18:29:15.427397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.600 [2024-11-19 18:29:15.427411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.600 qpair failed and we were unable to recover it. 
00:30:14.600 [2024-11-19 18:29:15.427692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.600 [2024-11-19 18:29:15.427704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.600 qpair failed and we were unable to recover it. 
00:30:14.603 [2024-11-19 18:29:15.463277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.603 [2024-11-19 18:29:15.463292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.603 qpair failed and we were unable to recover it. 00:30:14.603 [2024-11-19 18:29:15.463611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.603 [2024-11-19 18:29:15.463627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.603 qpair failed and we were unable to recover it. 00:30:14.603 [2024-11-19 18:29:15.463923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.603 [2024-11-19 18:29:15.463937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.603 qpair failed and we were unable to recover it. 00:30:14.603 [2024-11-19 18:29:15.464238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.603 [2024-11-19 18:29:15.464253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.603 qpair failed and we were unable to recover it. 00:30:14.603 [2024-11-19 18:29:15.464573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.603 [2024-11-19 18:29:15.464588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.603 qpair failed and we were unable to recover it. 
00:30:14.603 [2024-11-19 18:29:15.464880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.603 [2024-11-19 18:29:15.464896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.603 qpair failed and we were unable to recover it. 00:30:14.603 [2024-11-19 18:29:15.465222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.465236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.465528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.465542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.465831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.465845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.466179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.466198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 
00:30:14.604 [2024-11-19 18:29:15.466516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.466532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.466852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.466869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.467170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.467186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.467506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.467523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.467835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.467851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 
00:30:14.604 [2024-11-19 18:29:15.468177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.468197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.468502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.468520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.468822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.468839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.469141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.469166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.469480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.469497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 
00:30:14.604 [2024-11-19 18:29:15.469799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.469816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.470098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.470114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.470399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.470415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.470721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.470737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.471059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.471076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 
00:30:14.604 [2024-11-19 18:29:15.471367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.471384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.471743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.471760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.472067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.472083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.472385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.472401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.472703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.472721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 
00:30:14.604 [2024-11-19 18:29:15.473023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.473039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.473371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.473389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.473698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.473714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.474020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.474037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.474334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.474351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 
00:30:14.604 [2024-11-19 18:29:15.474645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.474663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.474845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.474864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.475182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.604 [2024-11-19 18:29:15.475201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.604 qpair failed and we were unable to recover it. 00:30:14.604 [2024-11-19 18:29:15.475497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.475512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.475814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.475831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 
00:30:14.605 [2024-11-19 18:29:15.476036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.476052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.476388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.476405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.476709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.476727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.476924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.476942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.477245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.477261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 
00:30:14.605 [2024-11-19 18:29:15.477591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.477609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.477919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.477935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.478149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.478172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.478453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.478469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.478760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.478777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 
00:30:14.605 [2024-11-19 18:29:15.479095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.479111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.479333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.479349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.479662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.479678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.479986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.480003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.480383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.480400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 
00:30:14.605 [2024-11-19 18:29:15.480713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.480730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.481035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.481055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.481396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.481414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.481719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.481736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.482112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.482131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 
00:30:14.605 [2024-11-19 18:29:15.482452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.482479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.482794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.482811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.483121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.483138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.483435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.483452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.483637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.483655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 
00:30:14.605 [2024-11-19 18:29:15.483969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.483985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.484282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.484299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.484594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.484611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.484917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.484934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.485252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.485269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 
00:30:14.605 [2024-11-19 18:29:15.485590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.485606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.485987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.486003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.486282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.486298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.486612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.486628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 00:30:14.605 [2024-11-19 18:29:15.486941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.605 [2024-11-19 18:29:15.486957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.605 qpair failed and we were unable to recover it. 
00:30:14.605 [2024-11-19 18:29:15.487267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.605 [2024-11-19 18:29:15.487285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.605 qpair failed and we were unable to recover it.
[... the same connect()/qpair error triplet repeats from 18:29:15.487588 through 18:29:15.525040, always with errno = 111, tqpair=0x1e3c0c0, addr=10.0.0.2, port=4420 ...]
00:30:14.608 [2024-11-19 18:29:15.525339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.525355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.525697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.525715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.526059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.526075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.526407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.526425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.526781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.526798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 
00:30:14.609 [2024-11-19 18:29:15.527127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.527145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.527484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.527502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.527861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.527878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.528209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.528227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.528508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.528523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 
00:30:14.609 [2024-11-19 18:29:15.528840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.528856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.529182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.529200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.529499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.529516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.529826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.529843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.530095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.530111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 
00:30:14.609 [2024-11-19 18:29:15.530440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.530462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.530775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.530792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.531013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.531029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.531338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.531356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.531687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.531703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 
00:30:14.609 [2024-11-19 18:29:15.532039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.532055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.532401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.532420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.532641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.532657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.533000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.533017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.533328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.533345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 
00:30:14.609 [2024-11-19 18:29:15.533674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.533692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.534011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.534028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.534367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.534385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.534694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.534711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.535040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.535058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 
00:30:14.609 [2024-11-19 18:29:15.535375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.535392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.535721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.535739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.536083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.536100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.609 qpair failed and we were unable to recover it. 00:30:14.609 [2024-11-19 18:29:15.536413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.609 [2024-11-19 18:29:15.536431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.536770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.536787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 
00:30:14.610 [2024-11-19 18:29:15.537092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.537110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.537442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.537461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.537813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.537830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.538165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.538183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.538501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.538518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 
00:30:14.610 [2024-11-19 18:29:15.538851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.538868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.539196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.539213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.539533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.539551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.539882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.539899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.540227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.540244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 
00:30:14.610 [2024-11-19 18:29:15.540565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.540581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.540912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.540928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.541304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.541320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.541723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.541739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.542040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.542058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 
00:30:14.610 [2024-11-19 18:29:15.542371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.542388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.542716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.542735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.543064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.543081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.543413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.543430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.543752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.543768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 
00:30:14.610 [2024-11-19 18:29:15.544099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.544117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.544460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.544477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.544749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.544766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.545062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.545079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.545417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.545436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 
00:30:14.610 [2024-11-19 18:29:15.545765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.545781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.546181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.546198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.546527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.546544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.546864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.546882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.547207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.547225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 
00:30:14.610 [2024-11-19 18:29:15.547564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.547580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.547908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.547924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.548253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.548272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.548507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.548523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.548874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.548892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 
00:30:14.610 [2024-11-19 18:29:15.549226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.549245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.610 [2024-11-19 18:29:15.549568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.610 [2024-11-19 18:29:15.549584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.610 qpair failed and we were unable to recover it. 00:30:14.611 [2024-11-19 18:29:15.549911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.611 [2024-11-19 18:29:15.549927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.611 qpair failed and we were unable to recover it. 00:30:14.611 [2024-11-19 18:29:15.550255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.611 [2024-11-19 18:29:15.550272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.611 qpair failed and we were unable to recover it. 00:30:14.611 [2024-11-19 18:29:15.550606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.611 [2024-11-19 18:29:15.550622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.611 qpair failed and we were unable to recover it. 
00:30:14.611 [2024-11-19 18:29:15.550953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.611 [2024-11-19 18:29:15.550971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.611 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats for every retry from 18:29:15.551 through 18:29:15.590: each attempt to addr=10.0.0.2, port=4420 on tqpair=0x1e3c0c0 failed with errno = 111 (connection refused) and the qpair could not be recovered ...]
00:30:14.614 [2024-11-19 18:29:15.590952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.614 [2024-11-19 18:29:15.590970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.614 qpair failed and we were unable to recover it.
00:30:14.614 [2024-11-19 18:29:15.591292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.591310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.591640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.591658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.591978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.591997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.592223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.592242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.592535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.592551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 
00:30:14.614 [2024-11-19 18:29:15.592767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.592787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.593122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.593139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.593462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.593480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.593719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.593737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.594069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.594086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 
00:30:14.614 [2024-11-19 18:29:15.594388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.594405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.594742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.594759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.595081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.595100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.595430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.595448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.595776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.595794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 
00:30:14.614 [2024-11-19 18:29:15.596125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.596145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.597073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.597092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.597412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.597431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.597776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.597794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.598135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.598154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 
00:30:14.614 [2024-11-19 18:29:15.598564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.598581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.598912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.598929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.599134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.599152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.599462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.599480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.599826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.599845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 
00:30:14.614 [2024-11-19 18:29:15.600170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.600190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.600485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.600502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.600832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.600851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.601183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.601201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 00:30:14.614 [2024-11-19 18:29:15.601553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.614 [2024-11-19 18:29:15.601572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.614 qpair failed and we were unable to recover it. 
00:30:14.615 [2024-11-19 18:29:15.601903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.601922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.602303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.602321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.602688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.602706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.603039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.603056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.603262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.603279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 
00:30:14.615 [2024-11-19 18:29:15.603628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.603646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.603972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.603993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.604319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.604337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.604678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.604696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.605015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.605032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 
00:30:14.615 [2024-11-19 18:29:15.605368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.605387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.605790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.605807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.606136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.606165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.606460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.606478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.606805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.606825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 
00:30:14.615 [2024-11-19 18:29:15.607165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.607182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.607478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.607494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.607829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.607847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.608186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.608205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.608416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.608433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 
00:30:14.615 [2024-11-19 18:29:15.608775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.608793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.609172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.609191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.609522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.609540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.609738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.609755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.610098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.610115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 
00:30:14.615 [2024-11-19 18:29:15.610454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.610472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.610703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.610721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.611032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.611050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.611386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.611403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.611727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.611744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 
00:30:14.615 [2024-11-19 18:29:15.612075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.612094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.612434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.612451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.612787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.612806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.613125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.613143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.613471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.613491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 
00:30:14.615 [2024-11-19 18:29:15.613696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.613716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.614048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.615 [2024-11-19 18:29:15.614069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.615 qpair failed and we were unable to recover it. 00:30:14.615 [2024-11-19 18:29:15.614403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.616 [2024-11-19 18:29:15.614421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.616 qpair failed and we were unable to recover it. 00:30:14.616 [2024-11-19 18:29:15.614744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.616 [2024-11-19 18:29:15.614763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.616 qpair failed and we were unable to recover it. 00:30:14.616 [2024-11-19 18:29:15.614975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.616 [2024-11-19 18:29:15.614994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.616 qpair failed and we were unable to recover it. 
00:30:14.616 [2024-11-19 18:29:15.615332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.616 [2024-11-19 18:29:15.615350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.616 qpair failed and we were unable to recover it. 00:30:14.616 [2024-11-19 18:29:15.615686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.616 [2024-11-19 18:29:15.615706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.616 qpair failed and we were unable to recover it. 00:30:14.616 [2024-11-19 18:29:15.616032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.616 [2024-11-19 18:29:15.616051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.616 qpair failed and we were unable to recover it. 00:30:14.616 [2024-11-19 18:29:15.616381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.616 [2024-11-19 18:29:15.616399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.616 qpair failed and we were unable to recover it. 00:30:14.616 [2024-11-19 18:29:15.616602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.616 [2024-11-19 18:29:15.616619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.616 qpair failed and we were unable to recover it. 
00:30:14.616 [2024-11-19 18:29:15.616959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.616 [2024-11-19 18:29:15.616977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.616 qpair failed and we were unable to recover it. 00:30:14.616 [2024-11-19 18:29:15.617312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.616 [2024-11-19 18:29:15.617330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.616 qpair failed and we were unable to recover it. 00:30:14.616 [2024-11-19 18:29:15.617601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.616 [2024-11-19 18:29:15.617618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.616 qpair failed and we were unable to recover it. 00:30:14.616 [2024-11-19 18:29:15.617944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.616 [2024-11-19 18:29:15.617962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.616 qpair failed and we were unable to recover it. 00:30:14.616 [2024-11-19 18:29:15.618194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.616 [2024-11-19 18:29:15.618211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.616 qpair failed and we were unable to recover it. 
00:30:14.619 [2024-11-19 18:29:15.655679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.655696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.656033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.656053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.656274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.656292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.656645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.656663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.657001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.657019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 
00:30:14.619 [2024-11-19 18:29:15.657341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.657358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.657688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.657706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.658036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.658055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.658434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.658453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.658744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.658761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 
00:30:14.619 [2024-11-19 18:29:15.658975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.658991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.659330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.659350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.659684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.659701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.660036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.660054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.660406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.660424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 
00:30:14.619 [2024-11-19 18:29:15.660743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.660763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.661093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.661110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.661452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.661472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.661799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.661817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.662155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.662183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 
00:30:14.619 [2024-11-19 18:29:15.662509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.662527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.662855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.662874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.663211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.663229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.663560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.663578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 00:30:14.619 [2024-11-19 18:29:15.663807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.619 [2024-11-19 18:29:15.663823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.619 qpair failed and we were unable to recover it. 
00:30:14.620 [2024-11-19 18:29:15.664120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.664138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.664341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.664359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.664707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.664724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.664962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.664979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.665124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.665143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 
00:30:14.620 [2024-11-19 18:29:15.665521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.665539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.665874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.665892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.666224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.666242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.666589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.666605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.666943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.666960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 
00:30:14.620 [2024-11-19 18:29:15.667297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.667317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.667536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.667553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.667897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.667915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.668251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.668268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.668603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.668621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 
00:30:14.620 [2024-11-19 18:29:15.668961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.668979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.669318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.669337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.669587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.669612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.669950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.669969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.670306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.670325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 
00:30:14.620 [2024-11-19 18:29:15.670667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.670685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.671012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.671030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.671388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.671407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.671737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.671754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.671963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.671982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 
00:30:14.620 [2024-11-19 18:29:15.672325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.672345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.672683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.672702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.673027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.673045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.673325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.673345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.673696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.673715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 
00:30:14.620 [2024-11-19 18:29:15.673938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.673956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.674291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.674311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.674638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.674655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.674991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.675008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.675351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.675372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 
00:30:14.620 [2024-11-19 18:29:15.675707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.675725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.676064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.676084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.676421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.620 [2024-11-19 18:29:15.676440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.620 qpair failed and we were unable to recover it. 00:30:14.620 [2024-11-19 18:29:15.678010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.621 [2024-11-19 18:29:15.678067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.621 qpair failed and we were unable to recover it. 00:30:14.621 [2024-11-19 18:29:15.678456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.621 [2024-11-19 18:29:15.678479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.621 qpair failed and we were unable to recover it. 
00:30:14.621 [2024-11-19 18:29:15.678815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.621 [2024-11-19 18:29:15.678833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.621 qpair failed and we were unable to recover it. 00:30:14.621 [2024-11-19 18:29:15.679865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.621 [2024-11-19 18:29:15.679908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.621 qpair failed and we were unable to recover it. 00:30:14.621 [2024-11-19 18:29:15.680288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.621 [2024-11-19 18:29:15.680309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.621 qpair failed and we were unable to recover it. 00:30:14.621 [2024-11-19 18:29:15.680640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.621 [2024-11-19 18:29:15.680657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.621 qpair failed and we were unable to recover it. 00:30:14.621 [2024-11-19 18:29:15.680993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.621 [2024-11-19 18:29:15.681016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.621 qpair failed and we were unable to recover it. 
00:30:14.621 [2024-11-19 18:29:15.681324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.621 [2024-11-19 18:29:15.681342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.621 qpair failed and we were unable to recover it. 00:30:14.621 [2024-11-19 18:29:15.681690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.621 [2024-11-19 18:29:15.681709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.621 qpair failed and we were unable to recover it. 00:30:14.621 [2024-11-19 18:29:15.682048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.621 [2024-11-19 18:29:15.682067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.621 qpair failed and we were unable to recover it. 00:30:14.621 [2024-11-19 18:29:15.682410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.621 [2024-11-19 18:29:15.682429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.621 qpair failed and we were unable to recover it. 00:30:14.621 [2024-11-19 18:29:15.682768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.621 [2024-11-19 18:29:15.682786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.621 qpair failed and we were unable to recover it. 
00:30:14.621 [2024-11-19 18:29:15.683131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.683148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.683498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.683517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.683845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.683864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.684201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.684219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.684570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.684588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.684925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.684943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.686500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.686547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.686915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.686936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.687280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.687300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.687642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.687661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.688003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.688021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.688337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.688356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.688706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.688725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.689052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.689072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.689408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.689426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.689763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.689781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.690168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.690187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.690541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.690557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.690898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.690917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.691257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.691275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.691606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.691624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.691951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.691972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.692177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.692196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.692542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.692560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.692897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.692914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.693253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.621 [2024-11-19 18:29:15.693273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.621 qpair failed and we were unable to recover it.
00:30:14.621 [2024-11-19 18:29:15.693497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.693516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.693738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.693759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.694101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.694120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.694449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.694471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.694804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.694823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.695154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.695187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.695500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.695519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.695855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.695875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.696228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.696248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.697535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.697577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.697961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.697982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.698319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.698338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.698549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.698569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.698881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.698900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.699229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.699249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.699603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.699621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.699942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.699962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.700300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.700319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.700663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.700681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.701019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.701037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.701376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.701394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.701741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.701759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.702101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.702121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.702506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.702524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.703983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.704030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.704400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.704424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.704777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.704795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.705132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.705152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.705486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.705504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.705845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.705864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.706200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.706219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.706455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.706477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.706812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.706830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.707179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.707198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.707535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.707552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.707893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.707911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.708247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.708274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.708535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.622 [2024-11-19 18:29:15.708554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.622 qpair failed and we were unable to recover it.
00:30:14.622 [2024-11-19 18:29:15.708765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.708783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.709122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.709140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.710359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.710400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.710770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.710790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.711133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.711152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.711462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.711481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.711821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.711837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.712218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.712238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.712575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.712593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.712930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.712947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.713293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.713313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.713648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.713666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.714004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.714025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.714371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.714390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.714721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.714740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.714953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.714973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.715312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.715332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.715675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.715694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.715881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.715902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.716200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.716221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.717371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.717410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.717786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.717806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.718150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.718180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.718531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.718551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.718886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.718905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.719231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.719255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.719546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.719564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.719907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.719926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.720260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.720278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.720620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.720637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.720975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.720994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.721338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.721356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.623 [2024-11-19 18:29:15.721697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.623 [2024-11-19 18:29:15.721716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.623 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.721891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.721910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.722137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.722154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.722491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.722509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.722884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.722904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.723230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.723249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.723627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.723646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.723976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.723993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.724412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.724432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.724759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.724776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.725115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.725134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.726428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.726467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.726806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.726827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.727179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.727197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.727543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.727561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.727893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.727914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.728248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.728266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.728608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.728625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.728963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.624 [2024-11-19 18:29:15.728983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.624 qpair failed and we were unable to recover it.
00:30:14.624 [2024-11-19 18:29:15.729318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.729336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 00:30:14.624 [2024-11-19 18:29:15.729671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.729696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 00:30:14.624 [2024-11-19 18:29:15.730024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.730043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 00:30:14.624 [2024-11-19 18:29:15.730380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.730399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 00:30:14.624 [2024-11-19 18:29:15.730736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.730756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 
00:30:14.624 [2024-11-19 18:29:15.731092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.731110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 00:30:14.624 [2024-11-19 18:29:15.731341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.731358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 00:30:14.624 [2024-11-19 18:29:15.731704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.731724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 00:30:14.624 [2024-11-19 18:29:15.732059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.732079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 00:30:14.624 [2024-11-19 18:29:15.732417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.732436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 
00:30:14.624 [2024-11-19 18:29:15.732778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.732797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 00:30:14.624 [2024-11-19 18:29:15.733129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.733148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 00:30:14.624 [2024-11-19 18:29:15.733490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.733510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 00:30:14.624 [2024-11-19 18:29:15.733850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.733870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 00:30:14.624 [2024-11-19 18:29:15.734211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.734230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 
00:30:14.624 [2024-11-19 18:29:15.734578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.734595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 00:30:14.624 [2024-11-19 18:29:15.734929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.734947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 00:30:14.624 [2024-11-19 18:29:15.735279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.735297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 00:30:14.624 [2024-11-19 18:29:15.735651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.735669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 00:30:14.624 [2024-11-19 18:29:15.736005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.624 [2024-11-19 18:29:15.736024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.624 qpair failed and we were unable to recover it. 
00:30:14.625 [2024-11-19 18:29:15.736377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.736400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.736734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.736753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.737079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.737097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.737403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.737423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.737766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.737786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 
00:30:14.625 [2024-11-19 18:29:15.738124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.738143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.738480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.738497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.738835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.738853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.739214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.739232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.739578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.739595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 
00:30:14.625 [2024-11-19 18:29:15.739934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.739952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.740288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.740306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.740648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.740665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.741007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.741026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.741382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.741403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 
00:30:14.625 [2024-11-19 18:29:15.741740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.741758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.742090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.742109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.742464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.742482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.742819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.742835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.743189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.743209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 
00:30:14.625 [2024-11-19 18:29:15.743422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.743440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.744685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.744725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.745104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.745125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.745498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.745516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.745854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.745872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 
00:30:14.625 [2024-11-19 18:29:15.746209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.746228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.746563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.746581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.746796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.746815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.747023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.747044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.747360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.747380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 
00:30:14.625 [2024-11-19 18:29:15.747760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.747781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.748118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.748137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.748556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.748574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.748910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.748929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.749269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.749287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 
00:30:14.625 [2024-11-19 18:29:15.749623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.749642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.749977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.749998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.625 [2024-11-19 18:29:15.750336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.625 [2024-11-19 18:29:15.750355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.625 qpair failed and we were unable to recover it. 00:30:14.626 [2024-11-19 18:29:15.750692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.750712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 00:30:14.626 [2024-11-19 18:29:15.751050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.751069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 
00:30:14.626 [2024-11-19 18:29:15.751411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.751428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 00:30:14.626 [2024-11-19 18:29:15.751754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.751772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 00:30:14.626 [2024-11-19 18:29:15.752106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.752125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 00:30:14.626 [2024-11-19 18:29:15.752466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.752485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 00:30:14.626 [2024-11-19 18:29:15.752829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.752848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 
00:30:14.626 [2024-11-19 18:29:15.753185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.753203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 00:30:14.626 [2024-11-19 18:29:15.753574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.753593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 00:30:14.626 [2024-11-19 18:29:15.753882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.753899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 00:30:14.626 [2024-11-19 18:29:15.754237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.754256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 00:30:14.626 [2024-11-19 18:29:15.754602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.754624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 
00:30:14.626 [2024-11-19 18:29:15.754999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.755019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 00:30:14.626 [2024-11-19 18:29:15.755322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.755340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 00:30:14.626 [2024-11-19 18:29:15.755743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.755761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 00:30:14.626 [2024-11-19 18:29:15.756093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.756115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 00:30:14.626 [2024-11-19 18:29:15.756447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.756466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 
00:30:14.626 [2024-11-19 18:29:15.756794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.626 [2024-11-19 18:29:15.756811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.626 qpair failed and we were unable to recover it. 
00:30:14.626 [... the three messages above repeat for roughly 115 consecutive reconnect attempts between 18:29:15.756794 and 18:29:15.796871, every attempt failing with errno = 111 and identical tqpair=0x1e3c0c0, addr=10.0.0.2, port=4420 ...]
00:30:14.629 [2024-11-19 18:29:15.797209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.629 [2024-11-19 18:29:15.797227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-11-19 18:29:15.797578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.629 [2024-11-19 18:29:15.797602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-11-19 18:29:15.797940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.629 [2024-11-19 18:29:15.797959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-11-19 18:29:15.798296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.629 [2024-11-19 18:29:15.798316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-11-19 18:29:15.798653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.629 [2024-11-19 18:29:15.798671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.629 qpair failed and we were unable to recover it. 
00:30:14.629 [2024-11-19 18:29:15.799040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.629 [2024-11-19 18:29:15.799058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-11-19 18:29:15.799384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.629 [2024-11-19 18:29:15.799402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-11-19 18:29:15.799735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.629 [2024-11-19 18:29:15.799753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-11-19 18:29:15.800133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.629 [2024-11-19 18:29:15.800153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-11-19 18:29:15.800510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.629 [2024-11-19 18:29:15.800529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.629 qpair failed and we were unable to recover it. 
00:30:14.629 [2024-11-19 18:29:15.800922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.629 [2024-11-19 18:29:15.800941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-11-19 18:29:15.801139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.629 [2024-11-19 18:29:15.801168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-11-19 18:29:15.801519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.629 [2024-11-19 18:29:15.801536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-11-19 18:29:15.801900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.629 [2024-11-19 18:29:15.801919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.629 [2024-11-19 18:29:15.802227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.629 [2024-11-19 18:29:15.802245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.629 qpair failed and we were unable to recover it. 
00:30:14.629 [2024-11-19 18:29:15.802361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.629 [2024-11-19 18:29:15.802380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.629 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.802649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.802668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.802996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.803014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.803266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.803284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.803631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.803648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 
00:30:14.630 [2024-11-19 18:29:15.803987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.804004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.804311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.804328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.804513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.804533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.804916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.804935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.805231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.805250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 
00:30:14.630 [2024-11-19 18:29:15.805591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.805608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.805818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.805836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.806187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.806207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.806547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.806568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.806904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.806925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 
00:30:14.630 [2024-11-19 18:29:15.807132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.807150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.807484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.807501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.807833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.807851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.808199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.808219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.808432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.808450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 
00:30:14.630 [2024-11-19 18:29:15.808798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.808819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.809114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.809133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.809350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.809370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.809711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.809730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.810058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.810076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 
00:30:14.630 [2024-11-19 18:29:15.810426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.810445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.810784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.810803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.811116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.811136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.811468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.811488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.811821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.811839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 
00:30:14.630 [2024-11-19 18:29:15.812177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.812196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.812531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.812549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.812893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.812911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.813124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.813142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.813460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.813478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 
00:30:14.630 [2024-11-19 18:29:15.813812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.813830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.814184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.814205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.814451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.814469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.814794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.630 [2024-11-19 18:29:15.814811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.630 qpair failed and we were unable to recover it. 00:30:14.630 [2024-11-19 18:29:15.815150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.815177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 
00:30:14.631 [2024-11-19 18:29:15.815487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.815505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 00:30:14.631 [2024-11-19 18:29:15.815696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.815715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 00:30:14.631 [2024-11-19 18:29:15.815915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.815933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 00:30:14.631 [2024-11-19 18:29:15.816280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.816299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 00:30:14.631 [2024-11-19 18:29:15.816644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.816661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 
00:30:14.631 [2024-11-19 18:29:15.816998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.817016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 00:30:14.631 [2024-11-19 18:29:15.817333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.817351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 00:30:14.631 [2024-11-19 18:29:15.817688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.817705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 00:30:14.631 [2024-11-19 18:29:15.818037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.818057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 00:30:14.631 [2024-11-19 18:29:15.818389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.818408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 
00:30:14.631 [2024-11-19 18:29:15.818748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.818766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 00:30:14.631 [2024-11-19 18:29:15.819097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.819116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 00:30:14.631 [2024-11-19 18:29:15.819453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.819472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 00:30:14.631 [2024-11-19 18:29:15.819808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.819827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 00:30:14.631 [2024-11-19 18:29:15.820177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.820199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 
00:30:14.631 [2024-11-19 18:29:15.820531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.820550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 00:30:14.631 [2024-11-19 18:29:15.820888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.820907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 00:30:14.631 [2024-11-19 18:29:15.821254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.821274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 00:30:14.631 [2024-11-19 18:29:15.821627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.821646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 00:30:14.631 [2024-11-19 18:29:15.821876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.821894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 
00:30:14.631 [2024-11-19 18:29:15.822222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.631 [2024-11-19 18:29:15.822241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.631 qpair failed and we were unable to recover it. 
00:30:14.634 (last three messages repeated for each reconnect attempt of tqpair=0x1e3c0c0 to 10.0.0.2:4420 through [2024-11-19 18:29:15.861958], every attempt failing with errno = 111)
00:30:14.634 [2024-11-19 18:29:15.862298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.634 [2024-11-19 18:29:15.862318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.634 qpair failed and we were unable to recover it. 00:30:14.634 [2024-11-19 18:29:15.862657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.634 [2024-11-19 18:29:15.862677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.634 qpair failed and we were unable to recover it. 00:30:14.634 [2024-11-19 18:29:15.863012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.634 [2024-11-19 18:29:15.863032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.634 qpair failed and we were unable to recover it. 00:30:14.634 [2024-11-19 18:29:15.863343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.634 [2024-11-19 18:29:15.863361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.634 qpair failed and we were unable to recover it. 00:30:14.634 [2024-11-19 18:29:15.863695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.634 [2024-11-19 18:29:15.863715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.634 qpair failed and we were unable to recover it. 
00:30:14.634 [2024-11-19 18:29:15.864041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.634 [2024-11-19 18:29:15.864059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.634 qpair failed and we were unable to recover it. 00:30:14.634 [2024-11-19 18:29:15.864408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.634 [2024-11-19 18:29:15.864428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.634 qpair failed and we were unable to recover it. 00:30:14.634 [2024-11-19 18:29:15.864891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.634 [2024-11-19 18:29:15.864912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.634 qpair failed and we were unable to recover it. 00:30:14.634 [2024-11-19 18:29:15.865282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.634 [2024-11-19 18:29:15.865305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.634 qpair failed and we were unable to recover it. 00:30:14.634 [2024-11-19 18:29:15.865642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.634 [2024-11-19 18:29:15.865658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.634 qpair failed and we were unable to recover it. 
00:30:14.634 [2024-11-19 18:29:15.865989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.634 [2024-11-19 18:29:15.866008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.634 qpair failed and we were unable to recover it. 00:30:14.634 [2024-11-19 18:29:15.866322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.634 [2024-11-19 18:29:15.866341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.634 qpair failed and we were unable to recover it. 00:30:14.634 [2024-11-19 18:29:15.866674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.866693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.867028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.867046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.867261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.867281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 
00:30:14.635 [2024-11-19 18:29:15.867623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.867642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.868019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.868037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.868344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.868362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.868696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.868715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.869047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.869065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 
00:30:14.635 [2024-11-19 18:29:15.869415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.869432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.869808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.869826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.870178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.870200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.870509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.870528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.870861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.870879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 
00:30:14.635 [2024-11-19 18:29:15.871218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.871236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.871615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.871634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.871968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.871985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.872324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.872348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.872685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.872703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 
00:30:14.635 [2024-11-19 18:29:15.873038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.873056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.873402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.873420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.873757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.873776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.874140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.874165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.874508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.874526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 
00:30:14.635 [2024-11-19 18:29:15.874871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.874889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.875212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.875229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.875559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.875576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.875912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.875929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.876305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.876324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 
00:30:14.635 [2024-11-19 18:29:15.876670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.876688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.877019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.877036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.877258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.877278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.877625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.877643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.877975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.877991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 
00:30:14.635 [2024-11-19 18:29:15.878334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.878352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.878691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.878709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.879049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.879066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.879407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.879425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 00:30:14.635 [2024-11-19 18:29:15.879763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.635 [2024-11-19 18:29:15.879780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.635 qpair failed and we were unable to recover it. 
00:30:14.635 [2024-11-19 18:29:15.880111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.880127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.880457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.880476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.880815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.880833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.881180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.881199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.881401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.881419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 
00:30:14.636 [2024-11-19 18:29:15.881743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.881768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.882096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.882114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.882455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.882473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.882805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.882824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.883170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.883189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 
00:30:14.636 [2024-11-19 18:29:15.883489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.883506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.883842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.883859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.884189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.884208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.884539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.884557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.884892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.884911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 
00:30:14.636 [2024-11-19 18:29:15.885250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.885269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.885604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.885623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.885963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.885981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.886321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.886341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.886679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.886696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 
00:30:14.636 [2024-11-19 18:29:15.887044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.887064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.887393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.887411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.887748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.887766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.888098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.888117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.888457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.888476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 
00:30:14.636 [2024-11-19 18:29:15.888837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.888855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.889204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.889224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.889557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.889574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.889873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.889890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 00:30:14.636 [2024-11-19 18:29:15.890220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.636 [2024-11-19 18:29:15.890238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.636 qpair failed and we were unable to recover it. 
00:30:14.639 [2024-11-19 18:29:15.929182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.639 [2024-11-19 18:29:15.929199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.639 qpair failed and we were unable to recover it. 00:30:14.639 [2024-11-19 18:29:15.929505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.639 [2024-11-19 18:29:15.929522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.639 qpair failed and we were unable to recover it. 00:30:14.639 [2024-11-19 18:29:15.929863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.639 [2024-11-19 18:29:15.929882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.639 qpair failed and we were unable to recover it. 00:30:14.639 [2024-11-19 18:29:15.930232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.639 [2024-11-19 18:29:15.930249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.639 qpair failed and we were unable to recover it. 00:30:14.639 [2024-11-19 18:29:15.931884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.639 [2024-11-19 18:29:15.931931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.639 qpair failed and we were unable to recover it. 
00:30:14.639 [2024-11-19 18:29:15.932312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.639 [2024-11-19 18:29:15.932335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.639 qpair failed and we were unable to recover it. 00:30:14.639 [2024-11-19 18:29:15.932674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.639 [2024-11-19 18:29:15.932692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.639 qpair failed and we were unable to recover it. 00:30:14.639 [2024-11-19 18:29:15.933026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.639 [2024-11-19 18:29:15.933045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.639 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.933287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.933305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.933680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.933700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 
00:30:14.640 [2024-11-19 18:29:15.933916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.933934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.934260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.934280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.934612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.934630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.934968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.934985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.935325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.935343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 
00:30:14.640 [2024-11-19 18:29:15.935672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.935691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.936026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.936046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.936385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.936406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.936737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.936755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.937093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.937110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 
00:30:14.640 [2024-11-19 18:29:15.937453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.937473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.937805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.937824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.938150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.938180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.938526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.938546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.939976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.940022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 
00:30:14.640 [2024-11-19 18:29:15.940388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.940411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.941670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.941707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.942079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.942100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.942417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.942442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.942774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.942791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 
00:30:14.640 [2024-11-19 18:29:15.943125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.943143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.943484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.943502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.943843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.943862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.944077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.944100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.944444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.944465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 
00:30:14.640 [2024-11-19 18:29:15.944797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.944816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.945170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.945188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.945527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.945545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.945887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.945907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.946278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.946296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 
00:30:14.640 [2024-11-19 18:29:15.947226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.947268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.947647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.947674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.948056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.948078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.948415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.948439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.948792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.948811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 
00:30:14.640 [2024-11-19 18:29:15.949151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.640 [2024-11-19 18:29:15.949182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.640 qpair failed and we were unable to recover it. 00:30:14.640 [2024-11-19 18:29:15.949513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.949530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.949871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.949889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.950229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.950248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.950595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.950613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 
00:30:14.641 [2024-11-19 18:29:15.950956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.950975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.951207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.951226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.951538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.951558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.951869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.951887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.952224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.952242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 
00:30:14.641 [2024-11-19 18:29:15.952583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.952605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.952939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.952957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.953303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.953323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.953664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.953681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.954020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.954038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 
00:30:14.641 [2024-11-19 18:29:15.954385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.954404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.954802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.954821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.955201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.955224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.955523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.955541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.955898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.955916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 
00:30:14.641 [2024-11-19 18:29:15.956231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.956249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.956479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.956497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.956838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.956857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.957204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.957223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.957573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.957591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 
00:30:14.641 [2024-11-19 18:29:15.957914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.957934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.958178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.958197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.958547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.958566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.958898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.958915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.959231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.959250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 
00:30:14.641 [2024-11-19 18:29:15.959600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.959618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.959959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.959978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.960323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.960342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.960551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.960570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 00:30:14.641 [2024-11-19 18:29:15.960909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.641 [2024-11-19 18:29:15.960928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.641 qpair failed and we were unable to recover it. 
00:30:14.644 [2024-11-19 18:29:16.000353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.644 [2024-11-19 18:29:16.000373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.644 qpair failed and we were unable to recover it. 00:30:14.644 [2024-11-19 18:29:16.000586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.644 [2024-11-19 18:29:16.000604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.644 qpair failed and we were unable to recover it. 00:30:14.644 [2024-11-19 18:29:16.000938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.644 [2024-11-19 18:29:16.000958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.644 qpair failed and we were unable to recover it. 00:30:14.644 [2024-11-19 18:29:16.001308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.644 [2024-11-19 18:29:16.001326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.644 qpair failed and we were unable to recover it. 00:30:14.644 [2024-11-19 18:29:16.001551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.644 [2024-11-19 18:29:16.001568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.644 qpair failed and we were unable to recover it. 
00:30:14.644 [2024-11-19 18:29:16.001902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.644 [2024-11-19 18:29:16.001920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.644 qpair failed and we were unable to recover it. 00:30:14.644 [2024-11-19 18:29:16.002262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.644 [2024-11-19 18:29:16.002281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.644 qpair failed and we were unable to recover it. 00:30:14.644 [2024-11-19 18:29:16.002605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.002623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.002959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.002978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.003328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.003347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 
00:30:14.645 [2024-11-19 18:29:16.003686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.003703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.004044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.004064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.004410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.004428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.004763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.004781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.005129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.005147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 
00:30:14.645 [2024-11-19 18:29:16.005516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.005536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.005910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.005929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.006146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.006175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.006548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.006567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.006894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.006913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 
00:30:14.645 [2024-11-19 18:29:16.007251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.007270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.007614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.007633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.007970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.007989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.008316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.008335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.008672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.008690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 
00:30:14.645 [2024-11-19 18:29:16.009029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.009048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.009390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.009409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.009753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.009771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.010110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.010129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.010489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.010508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 
00:30:14.645 [2024-11-19 18:29:16.010832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.010851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.011188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.011208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.011548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.011566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.011916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.011935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.012265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.012285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 
00:30:14.645 [2024-11-19 18:29:16.012616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.012636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.012972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.012992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.013190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.013210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.013534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.013552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.013893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.013912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 
00:30:14.645 [2024-11-19 18:29:16.014255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.014274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.014611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.014631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.014956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.014976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.015314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.015333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.015662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.015681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 
00:30:14.645 [2024-11-19 18:29:16.016015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.016033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.016269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.016288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.016536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.645 [2024-11-19 18:29:16.016558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.645 qpair failed and we were unable to recover it. 00:30:14.645 [2024-11-19 18:29:16.016879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.016897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.017093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.017113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 
00:30:14.646 [2024-11-19 18:29:16.018455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.018498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.018850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.018872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.019200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.019220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.020681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.020727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.021105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.021125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 
00:30:14.646 [2024-11-19 18:29:16.021460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.021484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.021823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.021841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.022175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.022196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.022515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.022533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.022862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.022882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 
00:30:14.646 [2024-11-19 18:29:16.023219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.023239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.024514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.024554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.024924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.024944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.025284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.025303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.025649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.025669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 
00:30:14.646 [2024-11-19 18:29:16.026000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.026019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.026339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.026358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.026686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.026705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.027044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.027063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.027395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.027414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 
00:30:14.646 [2024-11-19 18:29:16.027752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.027770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.028098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.028117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.028459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.028478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.028814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.028833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.029129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.029147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 
00:30:14.646 [2024-11-19 18:29:16.029477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.029498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.029828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.029846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.030069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.030086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.030428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.030447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.030787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.030806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 
00:30:14.646 [2024-11-19 18:29:16.031145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.031172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.031506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.031525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.031857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.031879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.032090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.032107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 00:30:14.646 [2024-11-19 18:29:16.032339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.646 [2024-11-19 18:29:16.032359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.646 qpair failed and we were unable to recover it. 
00:30:14.646 [2024-11-19 18:29:16.032565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.646 [2024-11-19 18:29:16.032583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.646 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.032914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.032934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.033271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.033290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.033673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.033691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.034025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.034042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.034380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.034399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.034733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.034753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.035080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.035099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.036197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.036237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.036597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.036618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.036950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.036967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.037312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.037334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.037676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.037694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.038027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.038043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.038383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.038401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.038740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.038759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.039094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.039113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.039462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.039482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.039823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.039841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.040179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.040199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.040532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.040549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.040883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.040901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.647 [2024-11-19 18:29:16.041242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.647 [2024-11-19 18:29:16.041261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.647 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.041612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.041634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.042224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.042259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.042583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.042605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.042864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.042883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.043189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.043209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.043581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.043598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.043933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.043950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.044291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.044310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.044653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.044670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.045011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.045029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.045335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.045354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.045700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.045721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.046032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.046052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.046355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.046374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.046725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.046744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.047081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.047099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.047401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.047417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.922 [2024-11-19 18:29:16.047757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.922 [2024-11-19 18:29:16.047775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.922 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.048108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.048127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.048479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.048498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.048845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.048864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.049198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.049217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.049578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.049596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.049925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.049943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.050253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.050272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.050624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.050642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.050981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.051000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.051218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.051244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.051605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.051624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.051997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.052015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.052320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.052337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.052676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.052695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.052985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.053003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.053347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.053366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.054391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.054433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.054817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.054838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.055177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.055197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.055544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.055562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.055894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.055912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.056140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.056168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.056521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.056539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.056872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.056889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.057233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.057258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.057591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.057609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.057958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.057977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.058316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.058335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.058678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.058695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.059031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.059049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.059384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.059404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.059732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.059749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.060050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.060067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.060321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.060340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.060667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.060686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.061019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.061036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.923 [2024-11-19 18:29:16.061375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.923 [2024-11-19 18:29:16.061395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.923 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.061719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.924 [2024-11-19 18:29:16.061737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.924 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.062057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.924 [2024-11-19 18:29:16.062077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.924 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.062324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.924 [2024-11-19 18:29:16.062342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.924 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.062719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.924 [2024-11-19 18:29:16.062739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.924 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.063070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.924 [2024-11-19 18:29:16.063089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.924 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.063312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.924 [2024-11-19 18:29:16.063330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.924 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.063669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.924 [2024-11-19 18:29:16.063687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.924 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.064037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.924 [2024-11-19 18:29:16.064055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.924 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.064369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.924 [2024-11-19 18:29:16.064387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.924 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.064603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.924 [2024-11-19 18:29:16.064622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.924 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.064955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.924 [2024-11-19 18:29:16.064972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.924 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.065218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.924 [2024-11-19 18:29:16.065236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.924 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.065599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.924 [2024-11-19 18:29:16.065618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.924 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.065978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.924 [2024-11-19 18:29:16.065998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.924 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.066347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.924 [2024-11-19 18:29:16.066370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.924 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.066718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.924 [2024-11-19 18:29:16.066736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.924 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.066950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.924 [2024-11-19 18:29:16.066968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.924 qpair failed and we were unable to recover it.
00:30:14.924 [2024-11-19 18:29:16.067240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.067258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 00:30:14.924 [2024-11-19 18:29:16.067606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.067625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 00:30:14.924 [2024-11-19 18:29:16.067951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.067969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 00:30:14.924 [2024-11-19 18:29:16.068305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.068324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 00:30:14.924 [2024-11-19 18:29:16.068558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.068574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 
00:30:14.924 [2024-11-19 18:29:16.068908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.068925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 00:30:14.924 [2024-11-19 18:29:16.069234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.069253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 00:30:14.924 [2024-11-19 18:29:16.069616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.069634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 00:30:14.924 [2024-11-19 18:29:16.069763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.069779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 00:30:14.924 [2024-11-19 18:29:16.069997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.070016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 
00:30:14.924 [2024-11-19 18:29:16.070370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.070390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 00:30:14.924 [2024-11-19 18:29:16.070718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.070737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 00:30:14.924 [2024-11-19 18:29:16.071079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.071097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 00:30:14.924 [2024-11-19 18:29:16.071447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.071467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 00:30:14.924 [2024-11-19 18:29:16.071816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.071835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 
00:30:14.924 [2024-11-19 18:29:16.072174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.072195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 00:30:14.924 [2024-11-19 18:29:16.072555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.072573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 00:30:14.924 [2024-11-19 18:29:16.072787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.072804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 00:30:14.924 [2024-11-19 18:29:16.073126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.073143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 00:30:14.924 [2024-11-19 18:29:16.073367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.073387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 
00:30:14.924 [2024-11-19 18:29:16.073733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.924 [2024-11-19 18:29:16.073751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.924 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.073974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.073992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.074307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.074326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.074677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.074696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.075013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.075034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 
00:30:14.925 [2024-11-19 18:29:16.075285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.075304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.075646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.075663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.075995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.076012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.076360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.076383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.076784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.076805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 
00:30:14.925 [2024-11-19 18:29:16.077142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.077174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.077515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.077533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.077870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.077888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.078053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.078072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.078442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.078462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 
00:30:14.925 [2024-11-19 18:29:16.078791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.078810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.079132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.079151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.079498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.079516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.079865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.079883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.080231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.080252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 
00:30:14.925 [2024-11-19 18:29:16.080598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.080616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.080949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.080970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.081311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.081331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.081663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.081682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.081884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.081904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 
00:30:14.925 [2024-11-19 18:29:16.082132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.082148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.082536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.082554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.082881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.082898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.083226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.083245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.083543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.083560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 
00:30:14.925 [2024-11-19 18:29:16.083904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.083922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.084233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.084251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.084645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.084662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.084993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.085012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.085356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.085375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 
00:30:14.925 [2024-11-19 18:29:16.085713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.085732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.086067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.086084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.086393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.925 [2024-11-19 18:29:16.086412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.925 qpair failed and we were unable to recover it. 00:30:14.925 [2024-11-19 18:29:16.086745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.086762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.087100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.087120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 
00:30:14.926 [2024-11-19 18:29:16.087356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.087376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.087719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.087735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.088073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.088091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.088395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.088415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.088744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.088765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 
00:30:14.926 [2024-11-19 18:29:16.088978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.088999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.089339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.089357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.089579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.089595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.089926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.089944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.090281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.090301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 
00:30:14.926 [2024-11-19 18:29:16.090669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.090686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.091012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.091032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.091343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.091363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.091715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.091733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.091933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.091951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 
00:30:14.926 [2024-11-19 18:29:16.092248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.092267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.092605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.092622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.092966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.092984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.093315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.093334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.093668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.093688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 
00:30:14.926 [2024-11-19 18:29:16.094045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.094063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.094362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.094379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.094722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.094740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.095046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.095064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.095403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.095421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 
00:30:14.926 [2024-11-19 18:29:16.095728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.095747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.095965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.095982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.096333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.096353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.096476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.096493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 00:30:14.926 [2024-11-19 18:29:16.096859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.926 [2024-11-19 18:29:16.096878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.926 qpair failed and we were unable to recover it. 
00:30:14.929 [2024-11-19 18:29:16.134313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.929 [2024-11-19 18:29:16.134332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.929 qpair failed and we were unable to recover it. 00:30:14.929 [2024-11-19 18:29:16.134653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.929 [2024-11-19 18:29:16.134672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.929 qpair failed and we were unable to recover it. 00:30:14.929 [2024-11-19 18:29:16.135012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.929 [2024-11-19 18:29:16.135031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.929 qpair failed and we were unable to recover it. 00:30:14.929 [2024-11-19 18:29:16.135307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.929 [2024-11-19 18:29:16.135326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.929 qpair failed and we were unable to recover it. 00:30:14.929 [2024-11-19 18:29:16.135680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.135697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 
00:30:14.930 [2024-11-19 18:29:16.136047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.136065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.136369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.136387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.136711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.136729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.137074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.137093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.137336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.137353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 
00:30:14.930 [2024-11-19 18:29:16.137708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.137726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.138057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.138074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.138440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.138458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.138785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.138805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.139125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.139146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 
00:30:14.930 [2024-11-19 18:29:16.139398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.139416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.139769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.139787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.139973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.139991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.140250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.140268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.140642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.140659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 
00:30:14.930 [2024-11-19 18:29:16.140995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.141013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.141349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.141368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.141744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.141762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.142107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.142127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.142473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.142493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 
00:30:14.930 [2024-11-19 18:29:16.142837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.142855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.143193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.143212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.143518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.143537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.143873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.143890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.144238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.144258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 
00:30:14.930 [2024-11-19 18:29:16.144502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.144520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.144872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.144891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.145239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.145257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.145396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.145411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.145790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.145810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 
00:30:14.930 [2024-11-19 18:29:16.146123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.146140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.146407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.146425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.146754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.146774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.147148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.147180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.147587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.147605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 
00:30:14.930 [2024-11-19 18:29:16.147929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.147946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.148203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.148222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.930 qpair failed and we were unable to recover it. 00:30:14.930 [2024-11-19 18:29:16.148565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.930 [2024-11-19 18:29:16.148581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.148804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.148819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.149197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.149214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 
00:30:14.931 [2024-11-19 18:29:16.149557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.149574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.149907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.149922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.150203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.150220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.150480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.150497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.150828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.150843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 
00:30:14.931 [2024-11-19 18:29:16.151189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.151206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.151548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.151564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.151896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.151913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.152236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.152252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.152577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.152594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 
00:30:14.931 [2024-11-19 18:29:16.152933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.152949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.153249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.153265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.153632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.153649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.153848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.153864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.154154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.154183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 
00:30:14.931 [2024-11-19 18:29:16.154498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.154515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.154856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.154876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.155192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.155209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.155449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.155466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.155789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.155806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 
00:30:14.931 [2024-11-19 18:29:16.156145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.156174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.156520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.156537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.156879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.156896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.157231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.157248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.157589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.157605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 
00:30:14.931 [2024-11-19 18:29:16.157932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.157949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.158287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.158304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.158664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.158682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.159013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.159031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 00:30:14.931 [2024-11-19 18:29:16.159141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.931 [2024-11-19 18:29:16.159168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:14.931 qpair failed and we were unable to recover it. 
00:30:14.931 [2024-11-19 18:29:16.159437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.931 [2024-11-19 18:29:16.159454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.931 qpair failed and we were unable to recover it.
00:30:14.931 [2024-11-19 18:29:16.159658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.159677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.160015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.160033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.160393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.160410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.160756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.160774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.161155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.161184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.161396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.161413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.161754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.161771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.162108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.162125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.162450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.162468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.162808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.162826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.163236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.163254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.163621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.163637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.163994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.164011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.164244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.164262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.164640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.164656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.164994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.165011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.165225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.165244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.165496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.165515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.165640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.165658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.165883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.165899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.166226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.166244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.166499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.166516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.166858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.166875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.167223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.167241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.167587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.167603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.167948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.167965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.168317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.168334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.168674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.168690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.169044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.169061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.169216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.169234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.169544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.169562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.169905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.169922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.170321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.170339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.170674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.170696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.170994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.171011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.171401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.171418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.171649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.171667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.172027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.932 [2024-11-19 18:29:16.172046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.932 qpair failed and we were unable to recover it.
00:30:14.932 [2024-11-19 18:29:16.172288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.172305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.172628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.172643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.172987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.173002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.173403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.173422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.173637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.173655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.174002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.174019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.174335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.174352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.174706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.174722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.175025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.175041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.175393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.175410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.175737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.175753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.176100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.176117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.176425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.176442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.176764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.176781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.177195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.177212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.177597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.177614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.177969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.177985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.178328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.178345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.178685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.178702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.179012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.179028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.179389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.179408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.179715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.179731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.180107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.180127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.180481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.180499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.180835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.180851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.181202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.181220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.181595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.181611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.182722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.182763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.183153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.183185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.184251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.184286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.184625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.184643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.186154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.186211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.186466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.186485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.186818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.186834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.187177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.187194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.187420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.187438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.187783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.187798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.188126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.933 [2024-11-19 18:29:16.188142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.933 qpair failed and we were unable to recover it.
00:30:14.933 [2024-11-19 18:29:16.188565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.188583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.188936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.188952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.189289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.189306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.189659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.189676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.190045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.190062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.190470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.190486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.190827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.190842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.191193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.191210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.191603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.191618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.191952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.191969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.192303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.192319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.192545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.192561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.192915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.192931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.193352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.193368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.193806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.193822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.194069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.194084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.194453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.194470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.194841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.194857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.195209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.195225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.195539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.195554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 [2024-11-19 18:29:16.195786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.934 [2024-11-19 18:29:16.195802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:14.934 qpair failed and we were unable to recover it.
00:30:14.934 Read completed with error (sct=0, sc=8)
00:30:14.934 starting I/O failed
00:30:14.934 [... similar "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" entries omitted ...]
00:30:14.934 [2024-11-19 18:29:16.196636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.934 [... similar "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" entries omitted ...]
00:30:14.935 [2024-11-19 18:29:16.197492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:14.935 [2024-11-19 18:29:16.197761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31e00 is same with the state(6) to be set
00:30:14.935 [... similar "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" entries omitted ...]
00:30:14.935 [2024-11-19 18:29:16.198757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:14.935 [2024-11-19 18:29:16.199131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.935 [2024-11-19 18:29:16.199228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:14.935 qpair failed and we were unable to recover it.
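The "(sct=0, sc=8)" fields in the failed completions above are NVMe status codes. Under status code type 0 (Generic Command Status), a status code of 0x08 is, per the NVMe base specification, "Command Aborted due to SQ Deletion", which is consistent with the submission queues being torn down when the TCP connection to the target drops. A minimal decoder sketch (the table below is a small excerpt of the generic-status values, not the full specification table):

```python
# Decode the (sct, sc) pairs that SPDK prints in completion errors.
# Names follow the NVMe base spec's Generic Command Status table (sct=0);
# only a few common values are included here.
GENERIC_STATUS = {
    0x00: "Successful Completion",
    0x04: "Data Transfer Error",
    0x06: "Internal Error",
    0x07: "Command Abort Requested",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(sct: int, sc: int) -> str:
    """Return a human-readable name for an NVMe (sct, sc) status pair."""
    if sct == 0:
        return GENERIC_STATUS.get(sc, f"Generic status 0x{sc:02x}")
    return f"sct=0x{sct:x}, sc=0x{sc:02x}"

print(decode_status(0, 8))  # the status seen throughout this section
```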
00:30:14.937 [... the connect() failed (errno = 111) / sock-connection-error / "qpair failed and we were unable to recover it." triplet repeats for dozens of further reconnect attempts against tqpair=0x7f575c000b90, addr=10.0.0.2, port=4420 ...]
00:30:14.937 [2024-11-19 18:29:16.232133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.937 [2024-11-19 18:29:16.232173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.937 qpair failed and we were unable to recover it. 00:30:14.937 [2024-11-19 18:29:16.232540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.937 [2024-11-19 18:29:16.232573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.937 qpair failed and we were unable to recover it. 00:30:14.937 [2024-11-19 18:29:16.232910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.937 [2024-11-19 18:29:16.232942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.937 qpair failed and we were unable to recover it. 00:30:14.937 [2024-11-19 18:29:16.233282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.937 [2024-11-19 18:29:16.233313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.937 qpair failed and we were unable to recover it. 00:30:14.937 [2024-11-19 18:29:16.233678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.937 [2024-11-19 18:29:16.233708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.937 qpair failed and we were unable to recover it. 
00:30:14.937 [2024-11-19 18:29:16.234038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.234068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.234301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.234332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.234680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.234709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.235052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.235083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.235442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.235472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 
00:30:14.938 [2024-11-19 18:29:16.235818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.235850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.236194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.236225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.236567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.236597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.236942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.236972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.237331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.237362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 
00:30:14.938 [2024-11-19 18:29:16.237696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.237726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.238045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.238076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.238411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.238443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.238803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.238834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.239177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.239210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 
00:30:14.938 [2024-11-19 18:29:16.239581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.239610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.239975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.240006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.240336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.240367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.240726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.240755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.241092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.241122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 
00:30:14.938 [2024-11-19 18:29:16.241483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.241515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.241870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.241900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.242270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.242302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.242642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.242673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.243029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.243065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 
00:30:14.938 [2024-11-19 18:29:16.243431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.243463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.243792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.243821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.244174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.244206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.244554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.244586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.244921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.244951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 
00:30:14.938 [2024-11-19 18:29:16.245291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.245323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.245709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.245739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.246074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.246104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.246340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.246372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.246642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.246675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 
00:30:14.938 [2024-11-19 18:29:16.247006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.247036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.247368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.247399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.938 [2024-11-19 18:29:16.247749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.938 [2024-11-19 18:29:16.247779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.938 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.248124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.248155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.248505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.248535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 
00:30:14.939 [2024-11-19 18:29:16.248889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.248919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.249270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.249302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.250182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.250228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.250604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.250639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.250998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.251028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 
00:30:14.939 [2024-11-19 18:29:16.251377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.251409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.251763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.251793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.252067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.252097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.252436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.252467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.252781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.252811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 
00:30:14.939 [2024-11-19 18:29:16.253165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.253197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.253479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.253509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.253862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.253893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.254223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.254254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.254592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.254622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 
00:30:14.939 [2024-11-19 18:29:16.254944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.254974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.255319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.255351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.255690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.255720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.256075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.256104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.256428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.256469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 
00:30:14.939 [2024-11-19 18:29:16.256843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.256872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.257234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.257265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.257605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.257636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.257968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.257997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.258220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.258260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 
00:30:14.939 [2024-11-19 18:29:16.258583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.258614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.258954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.258984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.259303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.259334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.259682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.259712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.260038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.260068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 
00:30:14.939 [2024-11-19 18:29:16.260397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.260428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.260774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.260803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.261146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.261184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.261484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.261513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 00:30:14.939 [2024-11-19 18:29:16.261853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.939 [2024-11-19 18:29:16.261883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.939 qpair failed and we were unable to recover it. 
00:30:14.942 [... identical "connect() failed, errno = 111" / "qpair failed and we were unable to recover it" entries for tqpair=0x7f575c000b90, addr=10.0.0.2, port=4420 repeat through 18:29:16.302142 ...]
00:30:14.943 [2024-11-19 18:29:16.302477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.302508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.302829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.302859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.303210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.303240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.303580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.303609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.303947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.303977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 
00:30:14.943 [2024-11-19 18:29:16.304290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.304320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.304664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.304692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.305099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.305129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.305497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.305528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.305880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.305908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 
00:30:14.943 [2024-11-19 18:29:16.306260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.306291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.306652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.306683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.307037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.307067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.307410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.307441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.307792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.307823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 
00:30:14.943 [2024-11-19 18:29:16.308171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.308202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.308542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.308572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.308934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.308965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.309306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.309337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.309675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.309705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 
00:30:14.943 [2024-11-19 18:29:16.310068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.310097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.310423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.310454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.310802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.310832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.311189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.311219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.311562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.311591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 
00:30:14.943 [2024-11-19 18:29:16.312016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.312046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.312369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.312400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.312744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.312774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.313111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.313141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.313417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.313448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 
00:30:14.943 [2024-11-19 18:29:16.313787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.313818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.314155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.314196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.314412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.314442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.314799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.314828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.315174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.315205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 
00:30:14.943 [2024-11-19 18:29:16.315587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.943 [2024-11-19 18:29:16.315617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.943 qpair failed and we were unable to recover it. 00:30:14.943 [2024-11-19 18:29:16.315950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.315979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.316313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.316345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.316692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.316721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.317063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.317099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 
00:30:14.944 [2024-11-19 18:29:16.317430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.317461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.317787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.317817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.318197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.318229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.318553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.318584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.318971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.319000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 
00:30:14.944 [2024-11-19 18:29:16.319331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.319361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.319711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.319740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.320052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.320081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.320424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.320454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.320786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.320816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 
00:30:14.944 [2024-11-19 18:29:16.321185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.321216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.321564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.321593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.321932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.321962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.322281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.322313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.322665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.322694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 
00:30:14.944 [2024-11-19 18:29:16.323044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.323074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.323405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.323435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.323778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.323808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.324147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.324188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.324538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.324566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 
00:30:14.944 [2024-11-19 18:29:16.324880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.324910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.325218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.325248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.325547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.325576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.325900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.325930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.326304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.326334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 
00:30:14.944 [2024-11-19 18:29:16.326699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.326728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.327068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.327099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.327456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.327487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.327847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.327877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.328216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.328247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 
00:30:14.944 [2024-11-19 18:29:16.328601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.328631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.328869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.328899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.329231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.329282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.329612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.944 [2024-11-19 18:29:16.329641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.944 qpair failed and we were unable to recover it. 00:30:14.944 [2024-11-19 18:29:16.329998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.945 [2024-11-19 18:29:16.330027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.945 qpair failed and we were unable to recover it. 
00:30:14.945 [2024-11-19 18:29:16.330380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.945 [2024-11-19 18:29:16.330411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.945 qpair failed and we were unable to recover it. 00:30:14.945 [2024-11-19 18:29:16.330748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.945 [2024-11-19 18:29:16.330777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.945 qpair failed and we were unable to recover it. 00:30:14.945 [2024-11-19 18:29:16.331134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.945 [2024-11-19 18:29:16.331171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.945 qpair failed and we were unable to recover it. 00:30:14.945 [2024-11-19 18:29:16.331513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.945 [2024-11-19 18:29:16.331543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.945 qpair failed and we were unable to recover it. 00:30:14.945 [2024-11-19 18:29:16.331937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.945 [2024-11-19 18:29:16.331972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.945 qpair failed and we were unable to recover it. 
00:30:14.948 [2024-11-19 18:29:16.372788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.948 [2024-11-19 18:29:16.372818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.948 qpair failed and we were unable to recover it. 00:30:14.948 [2024-11-19 18:29:16.373175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.948 [2024-11-19 18:29:16.373206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.948 qpair failed and we were unable to recover it. 00:30:14.948 [2024-11-19 18:29:16.373538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.948 [2024-11-19 18:29:16.373568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.948 qpair failed and we were unable to recover it. 00:30:14.948 [2024-11-19 18:29:16.373948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.948 [2024-11-19 18:29:16.373977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.948 qpair failed and we were unable to recover it. 00:30:14.948 [2024-11-19 18:29:16.374207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.948 [2024-11-19 18:29:16.374240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.948 qpair failed and we were unable to recover it. 
00:30:14.948 [2024-11-19 18:29:16.374585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.948 [2024-11-19 18:29:16.374616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.948 qpair failed and we were unable to recover it. 00:30:14.948 [2024-11-19 18:29:16.374959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.948 [2024-11-19 18:29:16.374989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.948 qpair failed and we were unable to recover it. 00:30:14.948 [2024-11-19 18:29:16.375326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.948 [2024-11-19 18:29:16.375357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.948 qpair failed and we were unable to recover it. 00:30:14.948 [2024-11-19 18:29:16.375695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.948 [2024-11-19 18:29:16.375724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.948 qpair failed and we were unable to recover it. 00:30:14.948 [2024-11-19 18:29:16.376090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.948 [2024-11-19 18:29:16.376126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.948 qpair failed and we were unable to recover it. 
00:30:14.948 [2024-11-19 18:29:16.376482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.948 [2024-11-19 18:29:16.376513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:14.948 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.376842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.376872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.377224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.377254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.377627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.377656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.377997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.378026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 
00:30:15.221 [2024-11-19 18:29:16.378371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.378401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.378760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.378789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.379139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.379179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.379512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.379541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.379903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.379933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 
00:30:15.221 [2024-11-19 18:29:16.380263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.380295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.380623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.380652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.381000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.381030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.381373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.381404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.381745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.381776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 
00:30:15.221 [2024-11-19 18:29:16.382094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.382124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.382465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.382496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.382839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.382868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.383221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.383251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.383639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.383669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 
00:30:15.221 [2024-11-19 18:29:16.383888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.383920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.384276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.384307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.384643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.384673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.385023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.385053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.385390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.385421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 
00:30:15.221 [2024-11-19 18:29:16.385759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.385790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.386130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.386176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.386559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.386590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.386921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.386951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.387303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.387334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 
00:30:15.221 [2024-11-19 18:29:16.387661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.387691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.388033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.388062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.388406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.388437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.221 qpair failed and we were unable to recover it. 00:30:15.221 [2024-11-19 18:29:16.388752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.221 [2024-11-19 18:29:16.388782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.389123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.389152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 
00:30:15.222 [2024-11-19 18:29:16.389429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.389459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.389823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.389852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.390259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.390289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.390613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.390643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.390993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.391028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 
00:30:15.222 [2024-11-19 18:29:16.391372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.391403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.391743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.391773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.392128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.392157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.392382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.392414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.392739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.392770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 
00:30:15.222 [2024-11-19 18:29:16.393126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.393156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.393494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.393524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.393881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.393911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.394302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.394335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.394659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.394689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 
00:30:15.222 [2024-11-19 18:29:16.395033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.395064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.395430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.395462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.395798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.395828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.396170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.396201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.396565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.396595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 
00:30:15.222 [2024-11-19 18:29:16.396914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.396943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.397290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.397320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.397654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.397685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.398039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.398068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.398407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.398439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 
00:30:15.222 [2024-11-19 18:29:16.398674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.398707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.399037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.399067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.399418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.399449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.399808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.399838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 00:30:15.222 [2024-11-19 18:29:16.400171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.222 [2024-11-19 18:29:16.400202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.222 qpair failed and we were unable to recover it. 
00:30:15.222 [2024-11-19 18:29:16.400539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.222 [2024-11-19 18:29:16.400568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.222 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously from 18:29:16.400539 through 18:29:16.443213 ...]
00:30:15.226 [2024-11-19 18:29:16.443183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.226 [2024-11-19 18:29:16.443213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.226 qpair failed and we were unable to recover it.
00:30:15.226 [2024-11-19 18:29:16.443569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.443598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.443927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.443957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.444316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.444348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.444723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.444753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.445082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.445113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 
00:30:15.226 [2024-11-19 18:29:16.445466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.445495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.445733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.445772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.446114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.446144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.446487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.446517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.446834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.446864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 
00:30:15.226 [2024-11-19 18:29:16.447196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.447228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.447587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.447618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.447837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.447869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.448261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.448291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.448627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.448657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 
00:30:15.226 [2024-11-19 18:29:16.449000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.449031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.449372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.449402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.449739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.449769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.450111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.450140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.450483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.450514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 
00:30:15.226 [2024-11-19 18:29:16.450752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.450785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.450999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.451029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.451361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.451393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.451732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.451762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.452122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.452151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 
00:30:15.226 [2024-11-19 18:29:16.452489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.452520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.452872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.452902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.453223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.453253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.453618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.453648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.453988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.454018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 
00:30:15.226 [2024-11-19 18:29:16.454347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.454377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.454622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.454651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.226 qpair failed and we were unable to recover it. 00:30:15.226 [2024-11-19 18:29:16.454982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.226 [2024-11-19 18:29:16.455013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.455340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.455371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.455708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.455739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 
00:30:15.227 [2024-11-19 18:29:16.456081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.456114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.456455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.456485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.456832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.456861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.456983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.457015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.457334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.457366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 
00:30:15.227 [2024-11-19 18:29:16.457772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.457801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.458022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.458054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.458374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.458405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.458743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.458772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.459040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.459071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 
00:30:15.227 [2024-11-19 18:29:16.459391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.459423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.459750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.459787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.460138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.460175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.460532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.460561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.460896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.460925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 
00:30:15.227 [2024-11-19 18:29:16.461270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.461302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.461669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.461697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.462020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.462051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.462392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.462424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.462778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.462809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 
00:30:15.227 [2024-11-19 18:29:16.463136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.463178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.463514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.463544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.463772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.463804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.464023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.464054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.464397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.464428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 
00:30:15.227 [2024-11-19 18:29:16.464788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.464818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.465141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.465180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.465547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.465576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.465913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.465942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.466264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.466294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 
00:30:15.227 [2024-11-19 18:29:16.466631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.466661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.467020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.467050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.467369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.467401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.467734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.467763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 00:30:15.227 [2024-11-19 18:29:16.468128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.227 [2024-11-19 18:29:16.468157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.227 qpair failed and we were unable to recover it. 
00:30:15.227 [2024-11-19 18:29:16.468465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.228 [2024-11-19 18:29:16.468495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.228 qpair failed and we were unable to recover it. 00:30:15.228 [2024-11-19 18:29:16.468844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.228 [2024-11-19 18:29:16.468874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.228 qpair failed and we were unable to recover it. 00:30:15.228 [2024-11-19 18:29:16.469237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.228 [2024-11-19 18:29:16.469268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.228 qpair failed and we were unable to recover it. 00:30:15.228 [2024-11-19 18:29:16.469598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.228 [2024-11-19 18:29:16.469627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.228 qpair failed and we were unable to recover it. 00:30:15.228 [2024-11-19 18:29:16.469969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.228 [2024-11-19 18:29:16.469999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.228 qpair failed and we were unable to recover it. 
00:30:15.228 [2024-11-19 18:29:16.470352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.228 [2024-11-19 18:29:16.470383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.228 qpair failed and we were unable to recover it. 00:30:15.228 [2024-11-19 18:29:16.470723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.228 [2024-11-19 18:29:16.470753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.228 qpair failed and we were unable to recover it. 00:30:15.228 [2024-11-19 18:29:16.471095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.228 [2024-11-19 18:29:16.471124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.228 qpair failed and we were unable to recover it. 00:30:15.228 [2024-11-19 18:29:16.471443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.228 [2024-11-19 18:29:16.471474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.228 qpair failed and we were unable to recover it. 00:30:15.228 [2024-11-19 18:29:16.471814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.228 [2024-11-19 18:29:16.471843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.228 qpair failed and we were unable to recover it. 
00:30:15.231 [2024-11-19 18:29:16.513153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.513191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.513569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.513600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.513897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.513927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.514261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.514293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.514636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.514665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 
00:30:15.231 [2024-11-19 18:29:16.515025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.515054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.515409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.515441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.515648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.515680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.516058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.516088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.516316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.516350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 
00:30:15.231 [2024-11-19 18:29:16.516717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.516748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.517102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.517132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.517532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.517562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.517899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.517930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.518266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.518297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 
00:30:15.231 [2024-11-19 18:29:16.518635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.518664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.518994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.519029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.519368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.519401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.519725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.519754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.520102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.520132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 
00:30:15.231 [2024-11-19 18:29:16.520496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.520527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.520853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.520883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.521225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.521256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.521637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.521666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.521996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.522027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 
00:30:15.231 [2024-11-19 18:29:16.522395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.231 [2024-11-19 18:29:16.522424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-11-19 18:29:16.522788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.522817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.523167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.523200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.523540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.523570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.523888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.523918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 
00:30:15.232 [2024-11-19 18:29:16.524262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.524293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.524634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.524663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.524983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.525014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.525370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.525401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.525738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.525768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 
00:30:15.232 [2024-11-19 18:29:16.526088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.526119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.526470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.526501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.526741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.526772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.527104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.527135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.527494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.527524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 
00:30:15.232 [2024-11-19 18:29:16.527863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.527893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.528123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.528155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.528515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.528546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.528889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.528920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.529173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.529204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 
00:30:15.232 [2024-11-19 18:29:16.529539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.529569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.529906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.529935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.530275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.530307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.530638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.530668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.531010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.531039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 
00:30:15.232 [2024-11-19 18:29:16.531374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.531406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.531745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.531775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.532137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.532175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.532521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.532551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.532894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.532924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 
00:30:15.232 [2024-11-19 18:29:16.533285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.533315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.533638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.533675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.534009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.534039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.534395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.534427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.534772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.534802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 
00:30:15.232 [2024-11-19 18:29:16.535143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.535192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.535521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.535550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.535908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.535938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-11-19 18:29:16.536273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.232 [2024-11-19 18:29:16.536305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.233 [2024-11-19 18:29:16.536661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.233 [2024-11-19 18:29:16.536690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.233 qpair failed and we were unable to recover it. 
00:30:15.233 [2024-11-19 18:29:16.537007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.233 [2024-11-19 18:29:16.537037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.233 qpair failed and we were unable to recover it. 00:30:15.233 [2024-11-19 18:29:16.537351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.233 [2024-11-19 18:29:16.537382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.233 qpair failed and we were unable to recover it. 00:30:15.233 [2024-11-19 18:29:16.537606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.233 [2024-11-19 18:29:16.537635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.233 qpair failed and we were unable to recover it. 00:30:15.233 [2024-11-19 18:29:16.537874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.233 [2024-11-19 18:29:16.537907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.233 qpair failed and we were unable to recover it. 00:30:15.233 [2024-11-19 18:29:16.538172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.233 [2024-11-19 18:29:16.538203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.233 qpair failed and we were unable to recover it. 
00:30:15.233 [2024-11-19 18:29:16.538552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.233 [2024-11-19 18:29:16.538582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.233 qpair failed and we were unable to recover it. 00:30:15.233 [2024-11-19 18:29:16.538901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.233 [2024-11-19 18:29:16.538931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.233 qpair failed and we were unable to recover it. 00:30:15.233 [2024-11-19 18:29:16.539267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.233 [2024-11-19 18:29:16.539299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.233 qpair failed and we were unable to recover it. 00:30:15.233 [2024-11-19 18:29:16.539620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.233 [2024-11-19 18:29:16.539649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.233 qpair failed and we were unable to recover it. 00:30:15.233 [2024-11-19 18:29:16.540000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.233 [2024-11-19 18:29:16.540030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.233 qpair failed and we were unable to recover it. 
00:30:15.233 [2024-11-19 18:29:16.540395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.233 [2024-11-19 18:29:16.540426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.233 qpair failed and we were unable to recover it.
00:30:15.236 [identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated for tqpair=0x7f575c000b90 (addr=10.0.0.2, port=4420) from 18:29:16.540 through 18:29:16.582; duplicates elided]
00:30:15.236 [2024-11-19 18:29:16.582892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.582922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 00:30:15.236 [2024-11-19 18:29:16.583287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.583319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 00:30:15.236 [2024-11-19 18:29:16.583649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.583679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 00:30:15.236 [2024-11-19 18:29:16.584003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.584032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 00:30:15.236 [2024-11-19 18:29:16.584355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.584388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 
00:30:15.236 [2024-11-19 18:29:16.584724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.584754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 00:30:15.236 [2024-11-19 18:29:16.585110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.585140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 00:30:15.236 [2024-11-19 18:29:16.585486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.585517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 00:30:15.236 [2024-11-19 18:29:16.585857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.585888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 00:30:15.236 [2024-11-19 18:29:16.586225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.586256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 
00:30:15.236 [2024-11-19 18:29:16.586626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.586656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 00:30:15.236 [2024-11-19 18:29:16.586996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.587025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 00:30:15.236 [2024-11-19 18:29:16.587370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.587401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 00:30:15.236 [2024-11-19 18:29:16.587739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.587769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 00:30:15.236 [2024-11-19 18:29:16.588107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.588137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 
00:30:15.236 [2024-11-19 18:29:16.588530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.588560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 00:30:15.236 [2024-11-19 18:29:16.588869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.588898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 00:30:15.236 [2024-11-19 18:29:16.589245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.589277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 00:30:15.236 [2024-11-19 18:29:16.589610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.589639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 00:30:15.236 [2024-11-19 18:29:16.589964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.236 [2024-11-19 18:29:16.589994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.236 qpair failed and we were unable to recover it. 
00:30:15.237 [2024-11-19 18:29:16.590333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.590365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.590718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.590748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.591105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.591135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.591481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.591512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.591850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.591880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 
00:30:15.237 [2024-11-19 18:29:16.592231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.592262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.592612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.592641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.592992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.593022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.593367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.593397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.593754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.593783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 
00:30:15.237 [2024-11-19 18:29:16.594128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.594157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.594514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.594544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.594892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.594922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.595265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.595296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.595654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.595684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 
00:30:15.237 [2024-11-19 18:29:16.596022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.596053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.596385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.596416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.596774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.596804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.597131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.597167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.597505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.597534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 
00:30:15.237 [2024-11-19 18:29:16.597860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.597891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.598221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.598252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.598592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.598622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.598949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.598978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.599324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.599355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 
00:30:15.237 [2024-11-19 18:29:16.599686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.599716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.600026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.600056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.600398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.600428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.600767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.600798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.601167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.601198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 
00:30:15.237 [2024-11-19 18:29:16.601537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.601566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.601912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.601941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.602180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.602213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.602548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.602583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.602910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.602940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 
00:30:15.237 [2024-11-19 18:29:16.603269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.603302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.603659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.603688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.237 [2024-11-19 18:29:16.604022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.237 [2024-11-19 18:29:16.604051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.237 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.604383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.604414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.604763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.604792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 
00:30:15.238 [2024-11-19 18:29:16.605134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.605170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.605516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.605545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.605896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.605926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.606144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.606185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.606556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.606585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 
00:30:15.238 [2024-11-19 18:29:16.606930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.606960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.607309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.607341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.607721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.607752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.608074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.608104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.608441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.608471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 
00:30:15.238 [2024-11-19 18:29:16.608827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.608857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.609166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.609197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.609581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.609610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.609952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.609982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.610325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.610356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 
00:30:15.238 [2024-11-19 18:29:16.610707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.610736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.611087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.611116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.611465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.611497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.611838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.611868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 00:30:15.238 [2024-11-19 18:29:16.612223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.238 [2024-11-19 18:29:16.612254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.238 qpair failed and we were unable to recover it. 
00:30:15.238 [... the same connect() failed (errno = 111) / sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." triplet repeats 110 more times, timestamps 18:29:16.612591 through 18:29:16.653470 ...]
00:30:15.241 [2024-11-19 18:29:16.653832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.241 [2024-11-19 18:29:16.653861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.241 qpair failed and we were unable to recover it. 00:30:15.241 [2024-11-19 18:29:16.654188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.241 [2024-11-19 18:29:16.654218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.241 qpair failed and we were unable to recover it. 00:30:15.241 [2024-11-19 18:29:16.654546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.241 [2024-11-19 18:29:16.654576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.241 qpair failed and we were unable to recover it. 00:30:15.241 [2024-11-19 18:29:16.654948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.241 [2024-11-19 18:29:16.654977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.241 qpair failed and we were unable to recover it. 00:30:15.241 [2024-11-19 18:29:16.655303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.241 [2024-11-19 18:29:16.655334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.241 qpair failed and we were unable to recover it. 
00:30:15.241 [2024-11-19 18:29:16.655662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.241 [2024-11-19 18:29:16.655693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.241 qpair failed and we were unable to recover it. 00:30:15.241 [2024-11-19 18:29:16.656050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.241 [2024-11-19 18:29:16.656080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.241 qpair failed and we were unable to recover it. 00:30:15.241 [2024-11-19 18:29:16.656437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.241 [2024-11-19 18:29:16.656467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.241 qpair failed and we were unable to recover it. 00:30:15.241 [2024-11-19 18:29:16.656808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.241 [2024-11-19 18:29:16.656838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.241 qpair failed and we were unable to recover it. 00:30:15.241 [2024-11-19 18:29:16.657168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.241 [2024-11-19 18:29:16.657200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.241 qpair failed and we were unable to recover it. 
00:30:15.241 [2024-11-19 18:29:16.657537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.241 [2024-11-19 18:29:16.657566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.241 qpair failed and we were unable to recover it. 00:30:15.241 [2024-11-19 18:29:16.657921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.241 [2024-11-19 18:29:16.657950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.241 qpair failed and we were unable to recover it. 00:30:15.241 [2024-11-19 18:29:16.658176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.241 [2024-11-19 18:29:16.658209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.241 qpair failed and we were unable to recover it. 00:30:15.241 [2024-11-19 18:29:16.658523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.241 [2024-11-19 18:29:16.658552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.241 qpair failed and we were unable to recover it. 00:30:15.241 [2024-11-19 18:29:16.658898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.658928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 
00:30:15.242 [2024-11-19 18:29:16.659269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.659301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.659522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.659551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.659889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.659919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.660231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.660261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.660598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.660628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 
00:30:15.242 [2024-11-19 18:29:16.660972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.661001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.661350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.661381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.661718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.661749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.661973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.662013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.662339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.662370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 
00:30:15.242 [2024-11-19 18:29:16.662721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.662751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.663134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.663181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.663517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.663547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.663896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.663925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.664278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.664309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 
00:30:15.242 [2024-11-19 18:29:16.664655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.664686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.664912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.664942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.665302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.665332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.665682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.665712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.665940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.665972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 
00:30:15.242 [2024-11-19 18:29:16.666327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.666357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.666722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.666751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.667081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.667112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.667445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.667476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.667875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.667905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 
00:30:15.242 [2024-11-19 18:29:16.668175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.668205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.668544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.668574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.668797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.668828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.669179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.669210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.669561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.669591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 
00:30:15.242 [2024-11-19 18:29:16.669953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.669983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.670330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.670360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.670694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.670725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.670951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.670984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.671316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.671347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 
00:30:15.242 [2024-11-19 18:29:16.671692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.671723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.242 [2024-11-19 18:29:16.672074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.242 [2024-11-19 18:29:16.672103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.242 qpair failed and we were unable to recover it. 00:30:15.243 [2024-11-19 18:29:16.672444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.243 [2024-11-19 18:29:16.672475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.243 qpair failed and we were unable to recover it. 00:30:15.243 [2024-11-19 18:29:16.672721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.243 [2024-11-19 18:29:16.672750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.243 qpair failed and we were unable to recover it. 00:30:15.243 [2024-11-19 18:29:16.673112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.243 [2024-11-19 18:29:16.673142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.243 qpair failed and we were unable to recover it. 
00:30:15.243 [2024-11-19 18:29:16.673564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.243 [2024-11-19 18:29:16.673594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.243 qpair failed and we were unable to recover it. 00:30:15.243 [2024-11-19 18:29:16.673928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.243 [2024-11-19 18:29:16.673958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.243 qpair failed and we were unable to recover it. 00:30:15.243 [2024-11-19 18:29:16.674310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.243 [2024-11-19 18:29:16.674341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.243 qpair failed and we were unable to recover it. 00:30:15.243 [2024-11-19 18:29:16.674700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.243 [2024-11-19 18:29:16.674730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.243 qpair failed and we were unable to recover it. 00:30:15.243 [2024-11-19 18:29:16.675056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.243 [2024-11-19 18:29:16.675086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.243 qpair failed and we were unable to recover it. 
00:30:15.243 [2024-11-19 18:29:16.675406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.243 [2024-11-19 18:29:16.675437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.243 qpair failed and we were unable to recover it. 00:30:15.243 [2024-11-19 18:29:16.675791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.243 [2024-11-19 18:29:16.675822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.243 qpair failed and we were unable to recover it. 00:30:15.243 [2024-11-19 18:29:16.676221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.243 [2024-11-19 18:29:16.676252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.243 qpair failed and we were unable to recover it. 00:30:15.243 [2024-11-19 18:29:16.676540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.243 [2024-11-19 18:29:16.676575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.243 qpair failed and we were unable to recover it. 00:30:15.516 [2024-11-19 18:29:16.676895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.516 [2024-11-19 18:29:16.676926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.516 qpair failed and we were unable to recover it. 
00:30:15.516 [2024-11-19 18:29:16.677259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.516 [2024-11-19 18:29:16.677290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.516 qpair failed and we were unable to recover it. 00:30:15.516 [2024-11-19 18:29:16.677660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.516 [2024-11-19 18:29:16.677690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.516 qpair failed and we were unable to recover it. 00:30:15.516 [2024-11-19 18:29:16.678032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.516 [2024-11-19 18:29:16.678063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.516 qpair failed and we were unable to recover it. 00:30:15.516 [2024-11-19 18:29:16.678413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.516 [2024-11-19 18:29:16.678445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.516 qpair failed and we were unable to recover it. 00:30:15.516 [2024-11-19 18:29:16.678799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.516 [2024-11-19 18:29:16.678829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.516 qpair failed and we were unable to recover it. 
00:30:15.516 [2024-11-19 18:29:16.679167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.516 [2024-11-19 18:29:16.679197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.516 qpair failed and we were unable to recover it. 00:30:15.516 [2024-11-19 18:29:16.679534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.516 [2024-11-19 18:29:16.679564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.516 qpair failed and we were unable to recover it. 00:30:15.516 [2024-11-19 18:29:16.679927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.516 [2024-11-19 18:29:16.679955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.516 qpair failed and we were unable to recover it. 00:30:15.516 [2024-11-19 18:29:16.680302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.516 [2024-11-19 18:29:16.680333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.516 qpair failed and we were unable to recover it. 00:30:15.516 [2024-11-19 18:29:16.680672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.516 [2024-11-19 18:29:16.680702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.516 qpair failed and we were unable to recover it. 
00:30:15.516 [2024-11-19 18:29:16.680941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.516 [2024-11-19 18:29:16.680971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.516 qpair failed and we were unable to recover it.
00:30:15.520 [... the three messages above repeat continuously, with only the timestamps changing, through 2024-11-19 18:29:16.725843; subsequent occurrences elided ...]
00:30:15.520 [2024-11-19 18:29:16.726179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.520 [2024-11-19 18:29:16.726209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.520 qpair failed and we were unable to recover it. 00:30:15.520 [2024-11-19 18:29:16.726536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.520 [2024-11-19 18:29:16.726566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.520 qpair failed and we were unable to recover it. 00:30:15.520 [2024-11-19 18:29:16.726913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.520 [2024-11-19 18:29:16.726943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.520 qpair failed and we were unable to recover it. 00:30:15.520 [2024-11-19 18:29:16.727288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.520 [2024-11-19 18:29:16.727319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.520 qpair failed and we were unable to recover it. 00:30:15.520 [2024-11-19 18:29:16.727669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.520 [2024-11-19 18:29:16.727699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.520 qpair failed and we were unable to recover it. 
00:30:15.520 [2024-11-19 18:29:16.728050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.520 [2024-11-19 18:29:16.728080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.520 qpair failed and we were unable to recover it. 00:30:15.520 [2024-11-19 18:29:16.728420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.520 [2024-11-19 18:29:16.728451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.520 qpair failed and we were unable to recover it. 00:30:15.520 [2024-11-19 18:29:16.728787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.520 [2024-11-19 18:29:16.728817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.520 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.729167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.729199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.729545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.729574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 
00:30:15.521 [2024-11-19 18:29:16.729916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.729946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.730298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.730330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.730672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.730701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.731058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.731089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.731442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.731473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 
00:30:15.521 [2024-11-19 18:29:16.731819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.731849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.732203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.732233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.732577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.732608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.732944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.732974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.733327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.733358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 
00:30:15.521 [2024-11-19 18:29:16.733772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.733804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.734179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.734210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.734545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.734574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.734913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.734944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.735272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.735303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 
00:30:15.521 [2024-11-19 18:29:16.735645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.735675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.736019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.736048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.736387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.736417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.736761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.736790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.737133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.737170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 
00:30:15.521 [2024-11-19 18:29:16.737504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.737533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.737895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.737925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.738272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.738302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.738656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.738692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.739022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.739052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 
00:30:15.521 [2024-11-19 18:29:16.739396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.739428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.739769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.739798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.740177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.740207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.740585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.740614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.740946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.740975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 
00:30:15.521 [2024-11-19 18:29:16.741324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.741354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.741678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.741707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.521 [2024-11-19 18:29:16.742058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.521 [2024-11-19 18:29:16.742087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.521 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.742396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.742426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.742766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.742795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 
00:30:15.522 [2024-11-19 18:29:16.743136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.743174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.743508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.743538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.743884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.743914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.744255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.744286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.744621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.744652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 
00:30:15.522 [2024-11-19 18:29:16.745004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.745033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.745400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.745430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.745768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.745797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.746126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.746157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.746403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.746432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 
00:30:15.522 [2024-11-19 18:29:16.746768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.746797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.747134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.747185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.747504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.747534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.747928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.747957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.748293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.748325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 
00:30:15.522 [2024-11-19 18:29:16.748665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.748696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.749038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.749069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.749312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.749343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.749672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.749702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.522 [2024-11-19 18:29:16.750035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.750065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 
00:30:15.522 [2024-11-19 18:29:16.750394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.522 [2024-11-19 18:29:16.750425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.522 qpair failed and we were unable to recover it. 00:30:15.523 [2024-11-19 18:29:16.750762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.523 [2024-11-19 18:29:16.750792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.523 qpair failed and we were unable to recover it. 00:30:15.523 [2024-11-19 18:29:16.751143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.523 [2024-11-19 18:29:16.751181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.523 qpair failed and we were unable to recover it. 00:30:15.523 [2024-11-19 18:29:16.751589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.523 [2024-11-19 18:29:16.751618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.523 qpair failed and we were unable to recover it. 00:30:15.523 [2024-11-19 18:29:16.752019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.523 [2024-11-19 18:29:16.752050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.523 qpair failed and we were unable to recover it. 
00:30:15.523 [2024-11-19 18:29:16.752324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.523 [2024-11-19 18:29:16.752357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.523 qpair failed and we were unable to recover it. 00:30:15.523 [2024-11-19 18:29:16.752711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.523 [2024-11-19 18:29:16.752739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.523 qpair failed and we were unable to recover it. 00:30:15.523 [2024-11-19 18:29:16.753092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.523 [2024-11-19 18:29:16.753122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.523 qpair failed and we were unable to recover it. 00:30:15.523 [2024-11-19 18:29:16.753505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.523 [2024-11-19 18:29:16.753542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.523 qpair failed and we were unable to recover it. 00:30:15.523 [2024-11-19 18:29:16.753865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.523 [2024-11-19 18:29:16.753895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.523 qpair failed and we were unable to recover it. 
00:30:15.523 [2024-11-19 18:29:16.754222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.523 [2024-11-19 18:29:16.754254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.523 qpair failed and we were unable to recover it.
00:30:15.526 (last three lines repeated ~114 more times through [2024-11-19 18:29:16.795713], identical apart from timestamps)
00:30:15.526 [2024-11-19 18:29:16.796047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.526 [2024-11-19 18:29:16.796076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.796320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.796350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.796685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.796716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.797077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.797106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.797462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.797492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 
00:30:15.527 [2024-11-19 18:29:16.797776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.797806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.798175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.798206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.798546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.798576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.798917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.798946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.799264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.799296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 
00:30:15.527 [2024-11-19 18:29:16.799631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.799660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.800007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.800036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.800458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.800489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.800818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.800847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.801259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.801290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 
00:30:15.527 [2024-11-19 18:29:16.801628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.801658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.802001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.802030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.802391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.802423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.802772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.802803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.803138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.803178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 
00:30:15.527 [2024-11-19 18:29:16.803510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.803540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.803882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.803912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.804261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.804293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.804639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.804668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.805027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.805057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 
00:30:15.527 [2024-11-19 18:29:16.805309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.805338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.805584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.805616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.805949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.805979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.806392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.806423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.527 qpair failed and we were unable to recover it. 00:30:15.527 [2024-11-19 18:29:16.806749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.527 [2024-11-19 18:29:16.806778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 
00:30:15.528 [2024-11-19 18:29:16.807118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.807176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.807525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.807555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.807896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.807926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.808180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.808212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.808465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.808495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 
00:30:15.528 [2024-11-19 18:29:16.808822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.808851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.809196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.809227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.809606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.809636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.809969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.809999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.810338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.810368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 
00:30:15.528 [2024-11-19 18:29:16.810718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.810748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.811097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.811126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.811481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.811512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.811853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.811882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.812295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.812330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 
00:30:15.528 [2024-11-19 18:29:16.812664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.812695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.813050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.813080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.813430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.813461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.813806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.813836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.814191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.814222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 
00:30:15.528 [2024-11-19 18:29:16.814579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.814609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.814837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.814867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.815217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.815247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.815605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.815635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.815973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.816003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 
00:30:15.528 [2024-11-19 18:29:16.816342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.816374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.816735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.816764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.817114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.817144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.817523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.817555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.817882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.817911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 
00:30:15.528 [2024-11-19 18:29:16.818250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.818281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.818628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.818657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.819000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.819029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.819274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.528 [2024-11-19 18:29:16.819303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.528 qpair failed and we were unable to recover it. 00:30:15.528 [2024-11-19 18:29:16.819671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.529 [2024-11-19 18:29:16.819700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.529 qpair failed and we were unable to recover it. 
00:30:15.529 [2024-11-19 18:29:16.820023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.529 [2024-11-19 18:29:16.820053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.529 qpair failed and we were unable to recover it. 00:30:15.529 [2024-11-19 18:29:16.820400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.529 [2024-11-19 18:29:16.820432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.529 qpair failed and we were unable to recover it. 00:30:15.529 [2024-11-19 18:29:16.820763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.529 [2024-11-19 18:29:16.820793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.529 qpair failed and we were unable to recover it. 00:30:15.529 [2024-11-19 18:29:16.821150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.529 [2024-11-19 18:29:16.821187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.529 qpair failed and we were unable to recover it. 00:30:15.529 [2024-11-19 18:29:16.821520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.529 [2024-11-19 18:29:16.821550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.529 qpair failed and we were unable to recover it. 
00:30:15.529 [2024-11-19 18:29:16.821897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.529 [2024-11-19 18:29:16.821933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.529 qpair failed and we were unable to recover it. 00:30:15.529 [2024-11-19 18:29:16.822275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.529 [2024-11-19 18:29:16.822305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.529 qpair failed and we were unable to recover it. 00:30:15.529 [2024-11-19 18:29:16.822645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.529 [2024-11-19 18:29:16.822675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.529 qpair failed and we were unable to recover it. 00:30:15.529 [2024-11-19 18:29:16.823025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.529 [2024-11-19 18:29:16.823055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.529 qpair failed and we were unable to recover it. 00:30:15.529 [2024-11-19 18:29:16.823411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.529 [2024-11-19 18:29:16.823441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.529 qpair failed and we were unable to recover it. 
00:30:15.529 [2024-11-19 18:29:16.823805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.823835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.824229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.824260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.824596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.824626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.824998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.825028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.825373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.825403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.825730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.825760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.826100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.826130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.826469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.826500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.826831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.826862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.827124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.827157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.827505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.827534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.827884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.827915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.828128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.828157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.828502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.828531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.828852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.828882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.829213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.829246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.829627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.829656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.829986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.830015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.830375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.830405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.830733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.529 [2024-11-19 18:29:16.830762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.529 qpair failed and we were unable to recover it.
00:30:15.529 [2024-11-19 18:29:16.831089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.831118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.831474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.831505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.831840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.831870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.832196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.832227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.832591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.832620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.832965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.832995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.833324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.833356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.833700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.833730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.834077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.834107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.834451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.834481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.834841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.834870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.835272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.835303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.835649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.835679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.836024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.836054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.836395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.836426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.836763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.836798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.837137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.837176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.837393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.837425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.837774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.837803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.838152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.838194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.838534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.838563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.838917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.838947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.839325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.839356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.839701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.839732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.840071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.840100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.840465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.840496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.840849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.840878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.841224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.841255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.841629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.841659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.842015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.842046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.842383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.842414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.842776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.842806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.843169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.843200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.530 [2024-11-19 18:29:16.843543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.530 [2024-11-19 18:29:16.843572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.530 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.843912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.843942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.844293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.844325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.844714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.844744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.844969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.844997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.845320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.845351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.845701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.845731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.846072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.846101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.846453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.846483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.846814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.846845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.847175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.847205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.847559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.847588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.847930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.847959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.848301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.848332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.848671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.848701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.849046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.849076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.849421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.849451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.849770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.849800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.850114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.850144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.850475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.850505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.850844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.850874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.851206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.851236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.851470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.851505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.851852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.851881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.852227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.852259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.852609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.852639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.852984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.853014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.853328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.853358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.853711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.853741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.854062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.854092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.854428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.531 [2024-11-19 18:29:16.854458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.531 qpair failed and we were unable to recover it.
00:30:15.531 [2024-11-19 18:29:16.854785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.854813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.855157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.855197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.855530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.855560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.855903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.855932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.856286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.856318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.856674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.856704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.857044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.857074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.857417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.857448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.857785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.857815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.858173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.858204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.858553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.858582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.858922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.858950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.859296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.859327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.859691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.859721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.860059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.860088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.860433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.860464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.860769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.860800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.861127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.861166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.861507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.861538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.861874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.861903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.862254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.862284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.862620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.862650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.862979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.863008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.863343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.863372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.863720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.863749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.864089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.864119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.864473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.864505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.864847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.864876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.865221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.865252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.865572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.532 [2024-11-19 18:29:16.865603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.532 qpair failed and we were unable to recover it.
00:30:15.532 [2024-11-19 18:29:16.865963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.533 [2024-11-19 18:29:16.865992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.533 qpair failed and we were unable to recover it.
00:30:15.533 [2024-11-19 18:29:16.866337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.866374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.866728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.866758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.867091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.867121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.867466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.867496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.867848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.867878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 
00:30:15.533 [2024-11-19 18:29:16.868110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.868139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.868480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.868511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.868852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.868883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.869234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.869265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.869618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.869648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 
00:30:15.533 [2024-11-19 18:29:16.869992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.870021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.870373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.870403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.870746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.870775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.871107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.871136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.871383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.871414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 
00:30:15.533 [2024-11-19 18:29:16.871771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.871801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.872146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.872192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.872531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.872561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.872894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.872924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.873272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.873304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 
00:30:15.533 [2024-11-19 18:29:16.873652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.873682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.874023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.874053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.874408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.874438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.874781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.874810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.875152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.875188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 
00:30:15.533 [2024-11-19 18:29:16.875439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.875468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.875818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.875848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.876189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.876220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.876559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.876589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 00:30:15.533 [2024-11-19 18:29:16.876931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.533 [2024-11-19 18:29:16.876962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.533 qpair failed and we were unable to recover it. 
00:30:15.533 [2024-11-19 18:29:16.877308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.877339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.877688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.877718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.878071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.878102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.878464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.878496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.878812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.878842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 
00:30:15.534 [2024-11-19 18:29:16.879181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.879211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.879572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.879601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.879947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.879976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.880318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.880349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.880588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.880617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 
00:30:15.534 [2024-11-19 18:29:16.880951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.880991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.881364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.881395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.881731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.881761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.882108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.882137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.882371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.882404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 
00:30:15.534 [2024-11-19 18:29:16.882615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.882645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.882811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.882839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.883198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.883229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.883592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.883622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.883961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.883990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 
00:30:15.534 [2024-11-19 18:29:16.884325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.884357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.884702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.884733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.885154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.885197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.885547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.885577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.885965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.885995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 
00:30:15.534 [2024-11-19 18:29:16.886325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.886356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.886706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.886736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.887058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.887089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.887453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.887483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.887823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.887853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 
00:30:15.534 [2024-11-19 18:29:16.888196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.888227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.888587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.888617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.534 [2024-11-19 18:29:16.888953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.534 [2024-11-19 18:29:16.888983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.534 qpair failed and we were unable to recover it. 00:30:15.535 [2024-11-19 18:29:16.889321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.535 [2024-11-19 18:29:16.889351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.535 qpair failed and we were unable to recover it. 00:30:15.535 [2024-11-19 18:29:16.889699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.535 [2024-11-19 18:29:16.889729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.535 qpair failed and we were unable to recover it. 
00:30:15.535 [2024-11-19 18:29:16.890081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.535 [2024-11-19 18:29:16.890110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.535 qpair failed and we were unable to recover it. 00:30:15.535 [2024-11-19 18:29:16.890340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.535 [2024-11-19 18:29:16.890370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.535 qpair failed and we were unable to recover it. 00:30:15.535 [2024-11-19 18:29:16.890704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.535 [2024-11-19 18:29:16.890735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.535 qpair failed and we were unable to recover it. 00:30:15.535 [2024-11-19 18:29:16.891034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.535 [2024-11-19 18:29:16.891064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.535 qpair failed and we were unable to recover it. 00:30:15.535 [2024-11-19 18:29:16.891405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.535 [2024-11-19 18:29:16.891436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.535 qpair failed and we were unable to recover it. 
00:30:15.535 [2024-11-19 18:29:16.891774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.535 [2024-11-19 18:29:16.891803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.535 qpair failed and we were unable to recover it. 00:30:15.535 [2024-11-19 18:29:16.892147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.535 [2024-11-19 18:29:16.892185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.535 qpair failed and we were unable to recover it. 00:30:15.535 [2024-11-19 18:29:16.892536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.535 [2024-11-19 18:29:16.892566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.535 qpair failed and we were unable to recover it. 00:30:15.535 [2024-11-19 18:29:16.892909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.535 [2024-11-19 18:29:16.892938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.535 qpair failed and we were unable to recover it. 00:30:15.535 [2024-11-19 18:29:16.893281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.535 [2024-11-19 18:29:16.893312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.535 qpair failed and we were unable to recover it. 
00:30:15.535 [2024-11-19 18:29:16.893655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.535 [2024-11-19 18:29:16.893686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.535 qpair failed and we were unable to recover it. 
[identical connect()/qpair failure pair for tqpair=0x7f575c000b90 (10.0.0.2:4420, errno = 111) repeated through 2024-11-19 18:29:16.935; repeats omitted]
00:30:15.539 [2024-11-19 18:29:16.935831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.935862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.936215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.936247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.936611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.936641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.936977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.937007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.937325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.937355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 
00:30:15.539 [2024-11-19 18:29:16.937702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.937732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.938065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.938095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.938442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.938473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.938815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.938846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.939184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.939214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 
00:30:15.539 [2024-11-19 18:29:16.939547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.939583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.939938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.939970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.940282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.940313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.940662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.940692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.941031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.941061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 
00:30:15.539 [2024-11-19 18:29:16.941397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.941427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.941774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.941804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.942149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.942187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.942575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.942604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.942932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.942962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 
00:30:15.539 [2024-11-19 18:29:16.943306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.943336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.943688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.943718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.943957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.943990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.944348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.944379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.539 [2024-11-19 18:29:16.944714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.944745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 
00:30:15.539 [2024-11-19 18:29:16.945084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.539 [2024-11-19 18:29:16.945114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.539 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.945352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.945383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.945718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.945748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.946092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.946121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.946490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.946522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 
00:30:15.540 [2024-11-19 18:29:16.946859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.946888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.947244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.947275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.947644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.947674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.948015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.948045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.948391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.948421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 
00:30:15.540 [2024-11-19 18:29:16.948762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.948791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.949134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.949173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.949524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.949554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.949896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.949925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.950244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.950274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 
00:30:15.540 [2024-11-19 18:29:16.950603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.950633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.950948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.950982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.951180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.951211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.951575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.951604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.951949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.951978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 
00:30:15.540 [2024-11-19 18:29:16.952318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.952350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.952701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.952731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.953075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.953105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.953460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.953490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.953836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.953867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 
00:30:15.540 [2024-11-19 18:29:16.954176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.954217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.954539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.954569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.954908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.954937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.955185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.955217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.955548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.955579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 
00:30:15.540 [2024-11-19 18:29:16.955920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.955949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.956287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.956319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.956669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.956698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.540 [2024-11-19 18:29:16.957049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.540 [2024-11-19 18:29:16.957079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.540 qpair failed and we were unable to recover it. 00:30:15.541 [2024-11-19 18:29:16.957430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.957461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 
00:30:15.541 [2024-11-19 18:29:16.957814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.957844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 00:30:15.541 [2024-11-19 18:29:16.958194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.958225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 00:30:15.541 [2024-11-19 18:29:16.958406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.958435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 00:30:15.541 [2024-11-19 18:29:16.958764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.958793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 00:30:15.541 [2024-11-19 18:29:16.959032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.959061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 
00:30:15.541 [2024-11-19 18:29:16.959396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.959427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 00:30:15.541 [2024-11-19 18:29:16.959742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.959771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 00:30:15.541 [2024-11-19 18:29:16.960005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.960033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 00:30:15.541 [2024-11-19 18:29:16.960377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.960408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 00:30:15.541 [2024-11-19 18:29:16.960676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.960705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 
00:30:15.541 [2024-11-19 18:29:16.961047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.961077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 00:30:15.541 [2024-11-19 18:29:16.961423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.961453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 00:30:15.541 [2024-11-19 18:29:16.961789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.961819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 00:30:15.541 [2024-11-19 18:29:16.962033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.962062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 00:30:15.541 [2024-11-19 18:29:16.962411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.962442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 
00:30:15.541 [2024-11-19 18:29:16.962787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.962816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 00:30:15.541 [2024-11-19 18:29:16.963173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.963205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 00:30:15.541 [2024-11-19 18:29:16.963590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.963620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 00:30:15.541 [2024-11-19 18:29:16.963950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.963979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 00:30:15.541 [2024-11-19 18:29:16.964406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.541 [2024-11-19 18:29:16.964437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.541 qpair failed and we were unable to recover it. 
00:30:15.818 [2024-11-19 18:29:17.005100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.005129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 00:30:15.818 [2024-11-19 18:29:17.005478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.005509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 00:30:15.818 [2024-11-19 18:29:17.005847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.005876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 00:30:15.818 [2024-11-19 18:29:17.006220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.006251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 00:30:15.818 [2024-11-19 18:29:17.006631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.006661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 
00:30:15.818 [2024-11-19 18:29:17.006988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.007018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 00:30:15.818 [2024-11-19 18:29:17.007374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.007405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 00:30:15.818 [2024-11-19 18:29:17.007734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.007769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 00:30:15.818 [2024-11-19 18:29:17.008094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.008124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 00:30:15.818 [2024-11-19 18:29:17.008467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.008498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 
00:30:15.818 [2024-11-19 18:29:17.008849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.008878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 00:30:15.818 [2024-11-19 18:29:17.009222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.009253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 00:30:15.818 [2024-11-19 18:29:17.009594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.009624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 00:30:15.818 [2024-11-19 18:29:17.009973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.010002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 00:30:15.818 [2024-11-19 18:29:17.010326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.010356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 
00:30:15.818 [2024-11-19 18:29:17.010708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.010738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 00:30:15.818 [2024-11-19 18:29:17.011090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.011120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 00:30:15.818 [2024-11-19 18:29:17.011477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.011508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.818 qpair failed and we were unable to recover it. 00:30:15.818 [2024-11-19 18:29:17.011853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.818 [2024-11-19 18:29:17.011882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.012215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.012246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 
00:30:15.819 [2024-11-19 18:29:17.012583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.012611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.012940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.012971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.013396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.013427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.013758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.013787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.014128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.014177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 
00:30:15.819 [2024-11-19 18:29:17.014580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.014610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.014940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.014969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.015314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.015345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.015687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.015718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.016058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.016087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 
00:30:15.819 [2024-11-19 18:29:17.016323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.016356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.016744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.016774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.017107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.017138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.017511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.017542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.017883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.017913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 
00:30:15.819 [2024-11-19 18:29:17.018260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.018291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.018621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.018650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.019002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.019032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.019428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.019459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.019796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.019825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 
00:30:15.819 [2024-11-19 18:29:17.020047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.020080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.020418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.020450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.020798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.020829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.021183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.021214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.021558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.021589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 
00:30:15.819 [2024-11-19 18:29:17.021936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.021965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.022295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.022326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.022663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.022698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.023037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.023066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 00:30:15.819 [2024-11-19 18:29:17.023396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.819 [2024-11-19 18:29:17.023427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.819 qpair failed and we were unable to recover it. 
00:30:15.819 [2024-11-19 18:29:17.023769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.023801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.024129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.024167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.024508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.024538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.024863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.024892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.025226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.025258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 
00:30:15.820 [2024-11-19 18:29:17.025593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.025622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.025963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.025993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.026285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.026315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.026654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.026682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.027023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.027053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 
00:30:15.820 [2024-11-19 18:29:17.027405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.027437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.027777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.027808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.028146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.028183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.028513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.028542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.028899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.028928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 
00:30:15.820 [2024-11-19 18:29:17.029182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.029212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.029553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.029583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.029925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.029954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.030325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.030357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.030695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.030724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 
00:30:15.820 [2024-11-19 18:29:17.031066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.031095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.031441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.031472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.031802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.031831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.032179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.032210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.032531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.032561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 
00:30:15.820 [2024-11-19 18:29:17.032900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.032928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.033290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.033321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.033669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.033698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.820 [2024-11-19 18:29:17.034033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.820 [2024-11-19 18:29:17.034062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.820 qpair failed and we were unable to recover it. 00:30:15.821 [2024-11-19 18:29:17.034418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.821 [2024-11-19 18:29:17.034448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.821 qpair failed and we were unable to recover it. 
00:30:15.824 [2024-11-19 18:29:17.075112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.824 [2024-11-19 18:29:17.075144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.824 qpair failed and we were unable to recover it. 00:30:15.824 [2024-11-19 18:29:17.075472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.824 [2024-11-19 18:29:17.075502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.824 qpair failed and we were unable to recover it. 00:30:15.824 [2024-11-19 18:29:17.075858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.824 [2024-11-19 18:29:17.075888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.824 qpair failed and we were unable to recover it. 00:30:15.824 [2024-11-19 18:29:17.076225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.824 [2024-11-19 18:29:17.076256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.824 qpair failed and we were unable to recover it. 00:30:15.824 [2024-11-19 18:29:17.076580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.824 [2024-11-19 18:29:17.076611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.824 qpair failed and we were unable to recover it. 
00:30:15.825 [2024-11-19 18:29:17.076926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.076956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.077297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.077328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.077668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.077697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.078051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.078080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.078388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.078421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 
00:30:15.825 [2024-11-19 18:29:17.078751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.078782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.079107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.079136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.079475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.079505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.079774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.079804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.080152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.080204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 
00:30:15.825 [2024-11-19 18:29:17.080576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.080605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.080937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.080966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.081327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.081364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.081700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.081730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.082076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.082106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 
00:30:15.825 [2024-11-19 18:29:17.082409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.082440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.082786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.082816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.083156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.083193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.083452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.083483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.083821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.083850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 
00:30:15.825 [2024-11-19 18:29:17.084195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.084225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.084658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.084687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.085018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.085048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.085396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.085427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.085765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.085793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 
00:30:15.825 [2024-11-19 18:29:17.086138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.086175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.086535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.086565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.086906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.086935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.087271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.087303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.087653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.087683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 
00:30:15.825 [2024-11-19 18:29:17.088020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.088049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.088297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.088330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.088667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.088697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.825 [2024-11-19 18:29:17.089039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.825 [2024-11-19 18:29:17.089070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.825 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.089429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.089460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 
00:30:15.826 [2024-11-19 18:29:17.089803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.089833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.090167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.090200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.090452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.090485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.090815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.090845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.091182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.091214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 
00:30:15.826 [2024-11-19 18:29:17.091563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.091593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.091935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.091964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.092322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.092353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.092692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.092722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.093041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.093071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 
00:30:15.826 [2024-11-19 18:29:17.093423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.093454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.093793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.093823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.094176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.094208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.094588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.094617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.094944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.094974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 
00:30:15.826 [2024-11-19 18:29:17.095320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.095351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.095706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.095736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.096083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.096119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.096501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.096532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.096884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.096914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 
00:30:15.826 [2024-11-19 18:29:17.097257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.097288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.097640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.097670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.098012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.826 [2024-11-19 18:29:17.098041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.826 qpair failed and we were unable to recover it. 00:30:15.826 [2024-11-19 18:29:17.098398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.827 [2024-11-19 18:29:17.098428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.827 qpair failed and we were unable to recover it. 00:30:15.827 [2024-11-19 18:29:17.098787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.827 [2024-11-19 18:29:17.098817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.827 qpair failed and we were unable to recover it. 
00:30:15.827 [2024-11-19 18:29:17.099165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.827 [2024-11-19 18:29:17.099196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.827 qpair failed and we were unable to recover it. 00:30:15.827 [2024-11-19 18:29:17.099525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.827 [2024-11-19 18:29:17.099554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.827 qpair failed and we were unable to recover it. 00:30:15.827 [2024-11-19 18:29:17.099897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.827 [2024-11-19 18:29:17.099927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.827 qpair failed and we were unable to recover it. 00:30:15.827 [2024-11-19 18:29:17.100267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.827 [2024-11-19 18:29:17.100298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.827 qpair failed and we were unable to recover it. 00:30:15.827 [2024-11-19 18:29:17.100647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.827 [2024-11-19 18:29:17.100677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.827 qpair failed and we were unable to recover it. 
00:30:15.827 [2024-11-19 18:29:17.101021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.827 [2024-11-19 18:29:17.101050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.827 qpair failed and we were unable to recover it. 00:30:15.827 [2024-11-19 18:29:17.101393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.827 [2024-11-19 18:29:17.101424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.827 qpair failed and we were unable to recover it. 00:30:15.827 [2024-11-19 18:29:17.101776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.827 [2024-11-19 18:29:17.101805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.827 qpair failed and we were unable to recover it. 00:30:15.827 [2024-11-19 18:29:17.102149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.827 [2024-11-19 18:29:17.102187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.827 qpair failed and we were unable to recover it. 00:30:15.827 [2024-11-19 18:29:17.102530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.827 [2024-11-19 18:29:17.102560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.827 qpair failed and we were unable to recover it. 
00:30:15.827 [2024-11-19 18:29:17.102900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.827 [2024-11-19 18:29:17.102930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.827 qpair failed and we were unable to recover it.
[identical connect()/qpair error triplet repeated for tqpair=0x7f575c000b90, addr=10.0.0.2, port=4420 throughout 18:29:17.102–18:29:17.144]
00:30:15.830 [2024-11-19 18:29:17.144768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.830 [2024-11-19 18:29:17.144797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:15.830 qpair failed and we were unable to recover it.
00:30:15.830 [2024-11-19 18:29:17.145123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.145151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.145502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.145533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.145882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.145911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.146265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.146297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.146631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.146662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 
00:30:15.831 [2024-11-19 18:29:17.146890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.146919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.147142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.147189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.147542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.147572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.147916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.147946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.148291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.148322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 
00:30:15.831 [2024-11-19 18:29:17.148670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.148699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.149045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.149075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.149429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.149461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.149846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.149876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.150205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.150237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 
00:30:15.831 [2024-11-19 18:29:17.150642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.150673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.151004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.151035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.151385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.151416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.151642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.151672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.152000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.152032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 
00:30:15.831 [2024-11-19 18:29:17.152368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.152400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.152737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.152768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.153098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.153127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.153481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.153512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.153860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.153891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 
00:30:15.831 [2024-11-19 18:29:17.154299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.154331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.154690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.154720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.155101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.155136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.155503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.155534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.831 [2024-11-19 18:29:17.155876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.155906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 
00:30:15.831 [2024-11-19 18:29:17.156249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.831 [2024-11-19 18:29:17.156280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.831 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.156617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.156646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.157002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.157032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.157395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.157425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.157748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.157778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 
00:30:15.832 [2024-11-19 18:29:17.158120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.158151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.158502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.158532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.158884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.158913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.159246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.159277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.159625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.159656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 
00:30:15.832 [2024-11-19 18:29:17.160004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.160035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.160371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.160402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.160750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.160780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.161143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.161182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.161569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.161599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 
00:30:15.832 [2024-11-19 18:29:17.161957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.161987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.162230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.162260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.162611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.162640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.162980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.163009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.163370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.163401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 
00:30:15.832 [2024-11-19 18:29:17.163744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.163775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.164122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.164151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.164507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.164536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.164887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.164917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.165259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.165296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 
00:30:15.832 [2024-11-19 18:29:17.165639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.165668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.166010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.166039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.166399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.166431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.166645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-11-19 18:29:17.166673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-11-19 18:29:17.167015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-11-19 18:29:17.167044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 
00:30:15.833 [2024-11-19 18:29:17.167368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-11-19 18:29:17.167400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-11-19 18:29:17.167740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-11-19 18:29:17.167770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-11-19 18:29:17.168105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-11-19 18:29:17.168136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-11-19 18:29:17.168490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-11-19 18:29:17.168520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-11-19 18:29:17.168860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-11-19 18:29:17.168890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 
00:30:15.833 [2024-11-19 18:29:17.169232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-11-19 18:29:17.169263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-11-19 18:29:17.169621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-11-19 18:29:17.169651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-11-19 18:29:17.169986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-11-19 18:29:17.170015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-11-19 18:29:17.170263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-11-19 18:29:17.170294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-11-19 18:29:17.170638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-11-19 18:29:17.170669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 
00:30:15.833 [2024-11-19 18:29:17.171013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-11-19 18:29:17.171042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-11-19 18:29:17.171282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-11-19 18:29:17.171312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-11-19 18:29:17.171541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-11-19 18:29:17.171571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-11-19 18:29:17.171913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-11-19 18:29:17.171943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-11-19 18:29:17.172284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-11-19 18:29:17.172314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 
00:30:15.833 [2024-11-19 18:29:17.172658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-11-19 18:29:17.172688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 
00:30:15.837 [... the same connect() failed, errno = 111 / qpair-recovery-failure pair for tqpair=0x7f575c000b90 (addr=10.0.0.2, port=4420) repeats continuously through 2024-11-19 18:29:17.213207 ...]
00:30:15.837 [2024-11-19 18:29:17.213570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.213601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 00:30:15.837 [2024-11-19 18:29:17.213930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.213961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 00:30:15.837 [2024-11-19 18:29:17.214321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.214353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 00:30:15.837 [2024-11-19 18:29:17.214700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.214731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 00:30:15.837 [2024-11-19 18:29:17.215067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.215096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 
00:30:15.837 [2024-11-19 18:29:17.215400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.215431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 00:30:15.837 [2024-11-19 18:29:17.215779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.215808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 00:30:15.837 [2024-11-19 18:29:17.216149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.216202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 00:30:15.837 [2024-11-19 18:29:17.216549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.216578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 00:30:15.837 [2024-11-19 18:29:17.216899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.216929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 
00:30:15.837 [2024-11-19 18:29:17.217267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.217297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 00:30:15.837 [2024-11-19 18:29:17.217641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.217672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 00:30:15.837 [2024-11-19 18:29:17.217962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.217992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 00:30:15.837 [2024-11-19 18:29:17.218328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.218359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 00:30:15.837 [2024-11-19 18:29:17.218690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.218720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 
00:30:15.837 [2024-11-19 18:29:17.219059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.219090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 00:30:15.837 [2024-11-19 18:29:17.219444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.219475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 00:30:15.837 [2024-11-19 18:29:17.219816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.219845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 00:30:15.837 [2024-11-19 18:29:17.220263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.220294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 00:30:15.837 [2024-11-19 18:29:17.220634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.220663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 
00:30:15.837 [2024-11-19 18:29:17.221013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.837 [2024-11-19 18:29:17.221042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.837 qpair failed and we were unable to recover it. 00:30:15.837 [2024-11-19 18:29:17.221408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.221441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.221771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.221800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.222133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.222184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.222533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.222568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 
00:30:15.838 [2024-11-19 18:29:17.222912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.222941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.223271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.223302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.223630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.223662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.224000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.224030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.224388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.224419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 
00:30:15.838 [2024-11-19 18:29:17.224758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.224788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.225133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.225173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.225600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.225629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.225963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.225993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.226328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.226359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 
00:30:15.838 [2024-11-19 18:29:17.226718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.226747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.227098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.227127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.227395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.227426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.227764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.227794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.228136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.228174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 
00:30:15.838 [2024-11-19 18:29:17.228520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.228550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.228867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.228896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.229127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.229181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.229559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.229589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.229819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.229848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 
00:30:15.838 [2024-11-19 18:29:17.230223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.230253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.230600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.230629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.230949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.230979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.231323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.231353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 00:30:15.838 [2024-11-19 18:29:17.231686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.838 [2024-11-19 18:29:17.231715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.838 qpair failed and we were unable to recover it. 
00:30:15.838 [2024-11-19 18:29:17.232047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.232078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.232400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.232431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.232672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.232702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.233025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.233055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.233406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.233436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 
00:30:15.839 [2024-11-19 18:29:17.233766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.233797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.234124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.234153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.234537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.234567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.234837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.234867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.235196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.235227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 
00:30:15.839 [2024-11-19 18:29:17.235555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.235585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.235919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.235949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.236291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.236321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.236648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.236677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.237029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.237064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 
00:30:15.839 [2024-11-19 18:29:17.237414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.237445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.237786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.237816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.238046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.238075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.238417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.238447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.238785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.238815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 
00:30:15.839 [2024-11-19 18:29:17.239188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.239219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.239530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.239568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.239886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.239916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.240132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.240168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.240524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.240554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 
00:30:15.839 [2024-11-19 18:29:17.240884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.240914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.241269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.241300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.241644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.241673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.242004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.242035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.242401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.242431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 
00:30:15.839 [2024-11-19 18:29:17.242767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.242797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.243134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-11-19 18:29:17.243173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-11-19 18:29:17.243403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.243431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.243793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.243823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.244141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.244180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 
00:30:15.840 [2024-11-19 18:29:17.244530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.244560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.244889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.244919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.245271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.245303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.245651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.245680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.246019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.246048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 
00:30:15.840 [2024-11-19 18:29:17.246397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.246428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.246766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.246797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.247130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.247167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.247546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.247575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.247899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.247930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 
00:30:15.840 [2024-11-19 18:29:17.248269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.248300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.248641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.248670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.248989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.249019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.249238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.249268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.249592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.249620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 
00:30:15.840 [2024-11-19 18:29:17.249946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.249975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.250297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.250328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.250533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.250562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.250882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.250911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.251271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.251308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 
00:30:15.840 [2024-11-19 18:29:17.251656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.251686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.252020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.252050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.252295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.252329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.252667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.252697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.253032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.253062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 
00:30:15.840 [2024-11-19 18:29:17.253406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.253437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.253770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.253799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.254025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.254055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-11-19 18:29:17.254395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-11-19 18:29:17.254427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.254762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.254793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 
00:30:15.841 [2024-11-19 18:29:17.255127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.255165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.255524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.255554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.255770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.255799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.256145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.256184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.256547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.256577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 
00:30:15.841 [2024-11-19 18:29:17.256906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.256935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.257286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.257318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.257666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.257696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.258009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.258039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.258402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.258433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 
00:30:15.841 [2024-11-19 18:29:17.258788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.258817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.259143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.259180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.259568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.259598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.259950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.259981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.260326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.260356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 
00:30:15.841 [2024-11-19 18:29:17.260677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.260706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.261065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.261094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.261242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.261273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.261509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.261541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.261889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.261920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 
00:30:15.841 [2024-11-19 18:29:17.262226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.262260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.262627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.262656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.263017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.263047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.263280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.263312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.263667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.263697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 
00:30:15.841 [2024-11-19 18:29:17.264027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.264057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.264400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.264431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.264769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.264798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.265101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.265131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.265514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.265552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 
00:30:15.841 [2024-11-19 18:29:17.265756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-11-19 18:29:17.265785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-11-19 18:29:17.265997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-11-19 18:29:17.266028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-11-19 18:29:17.266232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-11-19 18:29:17.266267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-11-19 18:29:17.266568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-11-19 18:29:17.266598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-11-19 18:29:17.266932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-11-19 18:29:17.266962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 
00:30:15.842 [2024-11-19 18:29:17.267209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-11-19 18:29:17.267239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-11-19 18:29:17.267613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-11-19 18:29:17.267643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-11-19 18:29:17.267970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-11-19 18:29:17.268000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-11-19 18:29:17.268339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-11-19 18:29:17.268370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-11-19 18:29:17.268696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-11-19 18:29:17.268725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 
00:30:15.842 [2024-11-19 18:29:17.269022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-11-19 18:29:17.269052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-11-19 18:29:17.269397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-11-19 18:29:17.269429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-11-19 18:29:17.269765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-11-19 18:29:17.269795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-11-19 18:29:17.270050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-11-19 18:29:17.270084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-11-19 18:29:17.270442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-11-19 18:29:17.270475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 
00:30:15.842 [2024-11-19 18:29:17.270840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-11-19 18:29:17.270870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-11-19 18:29:17.271219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-11-19 18:29:17.271250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-11-19 18:29:17.271608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-11-19 18:29:17.271640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-11-19 18:29:17.271873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-11-19 18:29:17.271903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-11-19 18:29:17.272135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-11-19 18:29:17.272173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 
00:30:16.118 [2024-11-19 18:29:17.272479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-11-19 18:29:17.272509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-11-19 18:29:17.272825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-11-19 18:29:17.272855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-11-19 18:29:17.273213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-11-19 18:29:17.273244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-11-19 18:29:17.273397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-11-19 18:29:17.273426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-11-19 18:29:17.273760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-11-19 18:29:17.273789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 
00:30:16.118 [2024-11-19 18:29:17.274126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-11-19 18:29:17.274155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-11-19 18:29:17.274523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-11-19 18:29:17.274555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-11-19 18:29:17.274863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-11-19 18:29:17.274894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-11-19 18:29:17.275236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-11-19 18:29:17.275268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-11-19 18:29:17.275613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-11-19 18:29:17.275644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 
00:30:16.118 [2024-11-19 18:29:17.276001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-11-19 18:29:17.276032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.119 [2024-11-19 18:29:17.276277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.119 [2024-11-19 18:29:17.276308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.119 qpair failed and we were unable to recover it. 00:30:16.119 [2024-11-19 18:29:17.276676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.119 [2024-11-19 18:29:17.276707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.119 qpair failed and we were unable to recover it. 00:30:16.119 [2024-11-19 18:29:17.276938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.119 [2024-11-19 18:29:17.276969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.119 qpair failed and we were unable to recover it. 00:30:16.119 [2024-11-19 18:29:17.277273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.119 [2024-11-19 18:29:17.277305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.119 qpair failed and we were unable to recover it. 
00:30:16.119 [2024-11-19 18:29:17.277528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.277558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.277927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.277957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.278254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.278285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.278631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.278661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.278872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.278922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.279298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.279330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.279578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.279612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.279941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.279970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.280257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.280289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.280629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.280659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.280991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.281021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.281379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.281411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.281801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.281831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.282169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.282200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.282522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.282551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.282900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.282930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.283275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.283307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.283660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.283690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.284038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.284068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.284410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.284441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.284785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.284815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.285149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.285188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.285519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.285549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.285891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.285921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.286266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.286298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.286634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.286663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.286989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.287018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.287373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.287405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.287672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.287701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.288091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.288120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.288474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.288505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.288850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.288881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.289215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.289247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.289598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.289628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.289964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.289994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.290316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.290348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.290729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.290759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.291108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.291138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.291495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.291525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.291955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.291985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.292268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.292299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.292708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.292739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.293059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.293089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.293323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.293353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.293712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.293747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.294092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.294122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.294403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.294433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.294664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.294693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.295067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.295098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.295457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.295488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.295706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.295737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.296085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.296116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.296562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.296594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.296932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.296961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.297312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.297344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.297686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.297716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.298058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.298088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.298414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.298445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.298857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.119 [2024-11-19 18:29:17.298887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.119 qpair failed and we were unable to recover it.
00:30:16.119 [2024-11-19 18:29:17.299271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.299301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.299645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.299675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.300003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.300034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.300332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.300361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.300718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.300747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.301092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.301121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.301450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.301480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.301791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.301821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.302154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.302193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.302536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.302567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.302901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.302930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.303263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.303295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.303646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.303675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.304022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.304052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.304495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.304526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.304858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.304888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.305226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.305256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.305634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.305665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.306077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.306107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.306490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.306521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.306860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.306892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.307252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.307283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.307621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.307650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.308006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.308036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.308273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.308302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.308634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.308672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.308896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.308926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.309167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.309201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.309556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.309586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.309930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.309961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.310312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.310344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.310684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.310713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.311051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.311080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.311308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.311339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.311667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.311697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.312065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.312095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.312426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.312456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.312853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.312882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.313230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.313261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.313527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.313558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.313885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.313914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.314323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.314354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.314700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.314730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.315089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.315118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.315460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.315490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.315883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.315912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.316233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.316264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.316628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.316658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.316999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.317029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.317296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.317327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.317664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.317694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.318050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.318080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.318418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.318449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.318677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.318710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.318945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.318974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.319312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.319342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.319720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.319749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.320084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.320114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.320527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.320558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.320885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.320915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.321229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.321259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.321609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.321638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.321987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.322017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.322388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.322419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.322816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.322846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.120 [2024-11-19 18:29:17.323179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.120 [2024-11-19 18:29:17.323215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.120 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.323567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.323598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.323949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.323978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.324336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.324367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.324701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.324731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.325077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.325107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.325445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.325476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.325818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.325846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.326111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.326140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.326525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.326556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.326806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.326835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.327166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.327198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.327578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.327608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.327958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.327988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.328323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.328353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.328571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.328602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.328929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.328959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.329310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.329341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.329697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.329727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.330104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.330133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.330473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.330504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.330826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.330856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.331081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.331111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.331363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.331397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.331755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.331786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.332120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.332150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.332514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.332545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.332893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.332923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.333140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.333183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.333582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.333611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.333953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.333982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.334282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.334315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.334676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.334705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.335032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.335062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.335425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.335457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.335791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.335821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.336176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.336207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.336622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.336651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.336995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.337025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.337366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.337397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.337725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.337762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.338110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.338140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.338484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.338515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.338890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.338920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.339274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.339306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.339649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.339679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.340031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.340061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.340403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.340433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.340809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.340838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.341176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.341207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.341564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.341593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.341834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.341863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.342225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.342255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.342471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.342503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.342866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.342897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.343149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.343189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.343516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.343546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.343890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.343919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.344291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.344322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.121 [2024-11-19 18:29:17.344588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.121 [2024-11-19 18:29:17.344616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.121 qpair failed and we were unable to recover it.
00:30:16.122 [2024-11-19 18:29:17.344968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.122 [2024-11-19 18:29:17.344997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.122 qpair failed and we were unable to recover it.
00:30:16.122 [2024-11-19 18:29:17.345200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.122 [2024-11-19 18:29:17.345234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.122 qpair failed and we were unable to recover it.
00:30:16.122 [2024-11-19 18:29:17.345498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.122 [2024-11-19 18:29:17.345528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.122 qpair failed and we were unable to recover it.
00:30:16.122 [2024-11-19 18:29:17.345852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.122 [2024-11-19 18:29:17.345883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.122 qpair failed and we were unable to recover it.
00:30:16.122 [2024-11-19 18:29:17.346296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.122 [2024-11-19 18:29:17.346327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.122 qpair failed and we were unable to recover it.
00:30:16.122 [2024-11-19 18:29:17.346539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.122 [2024-11-19 18:29:17.346569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.122 qpair failed and we were unable to recover it.
00:30:16.122 [2024-11-19 18:29:17.346886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.346915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.347269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.347307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.347543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.347571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.347924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.347953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.348303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.348334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 
00:30:16.122 [2024-11-19 18:29:17.348672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.348701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.349038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.349068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.349397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.349428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.349795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.349825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.350174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.350205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 
00:30:16.122 [2024-11-19 18:29:17.350534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.350563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.350919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.350949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.351286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.351318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.351755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.351784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.352118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.352149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 
00:30:16.122 [2024-11-19 18:29:17.352507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.352538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.352952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.352983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.353236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.353266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.353454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.353483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.353813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.353843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 
00:30:16.122 [2024-11-19 18:29:17.354253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.354283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.354643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.354673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.355030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.355061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.355311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.355344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.355594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.355627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 
00:30:16.122 [2024-11-19 18:29:17.355942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.355972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.356331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.356363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.356620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.356649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.356972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.357003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.357373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.357403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 
00:30:16.122 [2024-11-19 18:29:17.357732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.357762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.358189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.358221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.358543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.358574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.358896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.358926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.359271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.359302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 
00:30:16.122 [2024-11-19 18:29:17.359652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.359681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.360028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.360058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.360404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.360435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.360789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.360820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.361118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.361149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 
00:30:16.122 [2024-11-19 18:29:17.361533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.361565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.361838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.361874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.362228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.362260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.362608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.362638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.362967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.362998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 
00:30:16.122 [2024-11-19 18:29:17.363329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.363360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.363719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.363749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.364094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.364125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.364474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.364505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.364750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.364779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 
00:30:16.122 [2024-11-19 18:29:17.365036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.365067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.365422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.365453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.365801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.365831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.366180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.366211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.366552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.366581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 
00:30:16.122 [2024-11-19 18:29:17.366913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.366943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.367288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.367319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.367655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.367684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.122 [2024-11-19 18:29:17.368031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.122 [2024-11-19 18:29:17.368061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.122 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.368432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.368463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 
00:30:16.123 [2024-11-19 18:29:17.368717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.368746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.369120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.369149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.369504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.369535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.369875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.369905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.370117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.370150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 
00:30:16.123 [2024-11-19 18:29:17.370511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.370542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.370893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.370923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.371205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.371235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.371630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.371661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.372066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.372096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 
00:30:16.123 [2024-11-19 18:29:17.372315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.372346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.372710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.372740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.373105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.373135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.373478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.373509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.373852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.373882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 
00:30:16.123 [2024-11-19 18:29:17.374229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.374259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.374606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.374636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.374978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.375008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.375386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.375417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.375754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.375786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 
00:30:16.123 [2024-11-19 18:29:17.376122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.376151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.376516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.376553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.376908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.376939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.377309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.377340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.377657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.377688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 
00:30:16.123 [2024-11-19 18:29:17.378013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.378042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.378254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.378287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.378625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.378655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.378976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.379006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.379238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.379270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 
00:30:16.123 [2024-11-19 18:29:17.379611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.379640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.379877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.379909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.380186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.380218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.380577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.380607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.380902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.380931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 
00:30:16.123 [2024-11-19 18:29:17.381273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.381305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.381667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.381697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.382058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.382087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.382434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.382466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.382807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.382837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 
00:30:16.123 [2024-11-19 18:29:17.383079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.383109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.383488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.383519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.383874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.383905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.384236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.384267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.384626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.384656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 
00:30:16.123 [2024-11-19 18:29:17.385004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.385034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.385386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.385419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.385763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.385794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.386033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.386063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.386436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.386467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 
00:30:16.123 [2024-11-19 18:29:17.386814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.386844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.387219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.387250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.387590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.387619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.387948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.387978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.388330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.388362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 
00:30:16.123 [2024-11-19 18:29:17.388716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.388746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.389100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.389130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.389517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.389547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.389888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.389918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.390280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.390311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 
00:30:16.123 [2024-11-19 18:29:17.390664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.390698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.123 qpair failed and we were unable to recover it. 00:30:16.123 [2024-11-19 18:29:17.391088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.123 [2024-11-19 18:29:17.391129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.391521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.391551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.391900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.391929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.392278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.392309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 
00:30:16.124 [2024-11-19 18:29:17.392659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.392691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.392959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.392988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.393362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.393394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.393635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.393668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.394080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.394110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 
00:30:16.124 [2024-11-19 18:29:17.394463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.394497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.394827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.394857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.395208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.395239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.395508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.395539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.395882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.395912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 
00:30:16.124 [2024-11-19 18:29:17.396290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.396322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.396660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.396689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.397019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.397050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.397307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.397338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.397653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.397681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 
00:30:16.124 [2024-11-19 18:29:17.398035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.398065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.398480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.398511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.398860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.398890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.399268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.399299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.399640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.399670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 
00:30:16.124 [2024-11-19 18:29:17.400079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.400108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.400456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.400488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.400808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.400839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.401086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.401120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.401517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.401549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 
00:30:16.124 [2024-11-19 18:29:17.401890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.401921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.402156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.402195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.402516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.402547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.402884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.402914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.403247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.403279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 
00:30:16.124 [2024-11-19 18:29:17.403645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.403673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.404026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.404055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.404396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.404428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.404801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.404830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.405156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.405207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 
00:30:16.124 [2024-11-19 18:29:17.405575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.405604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.405850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.405885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.406207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.406240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.406481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.406510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.406873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.406904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 
00:30:16.124 [2024-11-19 18:29:17.407319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.407351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.407580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.407608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.407928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.407958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.408342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.408374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.408718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.408748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 
00:30:16.124 [2024-11-19 18:29:17.409088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.409117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.409354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.409384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.409708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.409737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.410078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.410107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.410501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.410532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 
00:30:16.124 [2024-11-19 18:29:17.410860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.410890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.411231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.411262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.411618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.411647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.412004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.412033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 00:30:16.124 [2024-11-19 18:29:17.412397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.124 [2024-11-19 18:29:17.412429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.124 qpair failed and we were unable to recover it. 
00:30:16.124 [2024-11-19 18:29:17.412774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.412804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.413146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.413186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.413534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.413563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.413890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.413921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.414184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.414215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.414532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.414563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.414920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.414950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.415175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.415206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.415616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.415646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.415987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.416017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.416484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.416515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.416860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.416891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.417235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.417268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.417522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.417552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.417905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.417934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.418289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.418322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.418693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2177633 Killed "${NVMF_APP[@]}" "$@"
00:30:16.125 [2024-11-19 18:29:17.418724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.419059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.419089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.419444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.419475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 18:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:16.125 [2024-11-19 18:29:17.419802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.419834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 18:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:16.125 [2024-11-19 18:29:17.420177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.420208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 18:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:16.125 18:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:16.125 [2024-11-19 18:29:17.420631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.420661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 18:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:16.125 [2024-11-19 18:29:17.420835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.420864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.421228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.421260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.421638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.421669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.422032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.422065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.422472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.422504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.422859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.422889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.423294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.423326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.423688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.423718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.424054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.424084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.424483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.424514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.424850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.424881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.425305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.425337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.425695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.425725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.426083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.426113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.426495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.426526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.426877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.426907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.427324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.427356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.427691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.427721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.428076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.428107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.428490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 18:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2178501
00:30:16.125 [2024-11-19 18:29:17.428523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 18:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2178501
00:30:16.125 [2024-11-19 18:29:17.428853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.428884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.429140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.429183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 18:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:16.125 18:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2178501 ']'
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 18:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:16.125 [2024-11-19 18:29:17.429575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.429606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 18:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:16.125 [2024-11-19 18:29:17.429950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 18:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:16.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:16.125 [2024-11-19 18:29:17.429980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 18:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:16.125 [2024-11-19 18:29:17.430315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.430348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 18:29:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:16.125 [2024-11-19 18:29:17.430708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.430740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.431097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.431128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.431468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.431500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.431855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.431886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.432219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.432250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.432629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.432659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.433007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.433038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.433460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.433503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.433845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.433878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.434221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.434253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.434520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.125 [2024-11-19 18:29:17.434554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.125 qpair failed and we were unable to recover it.
00:30:16.125 [2024-11-19 18:29:17.434794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.434830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.435170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.435204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.435560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.435591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.435922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.435953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.436203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.436236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.436600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.436632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.436974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.437004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.437277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.437309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.437662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.437692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.437921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.437959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.438193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.438226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.438623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.438656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.438994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.439026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.439365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.439398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.439625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.439656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.439989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.440019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.440245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.440276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.440544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.440575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.440900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.440930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.441289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.441321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.441671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.441700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.441943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.441972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.442227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.442258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.442629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.442660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.442989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.443018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.443457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.443489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.443854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.443885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.444097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.444130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.444591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.444623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.444852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.444882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.445109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.445139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.445501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.445534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.445886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.445916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.446185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.446217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.446654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.446684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.446918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.446948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.447288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.447319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.447654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.447683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.448046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.448077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.448458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.448489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.448839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.448869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.449218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.449251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.449616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.449648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.449984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.450018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.450260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.450291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.450635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.450666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.451008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.451038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.451400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.451431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.451784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.451814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.452233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.126 [2024-11-19 18:29:17.452269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.126 qpair failed and we were unable to recover it.
00:30:16.126 [2024-11-19 18:29:17.452604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.126 [2024-11-19 18:29:17.452633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.126 qpair failed and we were unable to recover it. 00:30:16.126 [2024-11-19 18:29:17.453034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.126 [2024-11-19 18:29:17.453065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.126 qpair failed and we were unable to recover it. 00:30:16.126 [2024-11-19 18:29:17.453437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.126 [2024-11-19 18:29:17.453468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.126 qpair failed and we were unable to recover it. 00:30:16.126 [2024-11-19 18:29:17.453880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.126 [2024-11-19 18:29:17.453909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.126 qpair failed and we were unable to recover it. 00:30:16.126 [2024-11-19 18:29:17.454197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.126 [2024-11-19 18:29:17.454236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.126 qpair failed and we were unable to recover it. 
00:30:16.126 [2024-11-19 18:29:17.454486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.126 [2024-11-19 18:29:17.454517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.126 qpair failed and we were unable to recover it. 00:30:16.126 [2024-11-19 18:29:17.454731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.126 [2024-11-19 18:29:17.454764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.126 qpair failed and we were unable to recover it. 00:30:16.126 [2024-11-19 18:29:17.455053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.126 [2024-11-19 18:29:17.455083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.126 qpair failed and we were unable to recover it. 00:30:16.126 [2024-11-19 18:29:17.455458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.126 [2024-11-19 18:29:17.455492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.126 qpair failed and we were unable to recover it. 00:30:16.126 [2024-11-19 18:29:17.455860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.126 [2024-11-19 18:29:17.455891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.126 qpair failed and we were unable to recover it. 
00:30:16.126 [2024-11-19 18:29:17.456191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.456223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.456567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.456598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.456861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.456889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.457239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.457271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.457617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.457648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 
00:30:16.127 [2024-11-19 18:29:17.457993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.458022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.458195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.458226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.458485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.458514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.458876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.458906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.459272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.459302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 
00:30:16.127 [2024-11-19 18:29:17.459646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.459675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.460071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.460100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.460498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.460530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.460862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.460891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.461241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.461271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 
00:30:16.127 [2024-11-19 18:29:17.461496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.461529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.461901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.461931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.462290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.462322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.462678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.462707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.463067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.463099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 
00:30:16.127 [2024-11-19 18:29:17.463446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.463477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.463821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.463851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.464211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.464241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.464530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.464559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.464896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.464927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 
00:30:16.127 [2024-11-19 18:29:17.465262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.465293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.465638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.465667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.466001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.466031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.466284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.466314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.466676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.466713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 
00:30:16.127 [2024-11-19 18:29:17.467052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.467081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.467448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.467480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.467831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.467861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.468099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.468128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.468359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.468391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 
00:30:16.127 [2024-11-19 18:29:17.468758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.468789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.468928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.468958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.469200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.469232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.469581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.469610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.469851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.469881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 
00:30:16.127 [2024-11-19 18:29:17.470060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.470090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.470437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.470468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.470807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.470838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.471170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.471201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.471558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.471587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 
00:30:16.127 [2024-11-19 18:29:17.471933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.471962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.472257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.472287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.472631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.472660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.472865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.472898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.473240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.473271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 
00:30:16.127 [2024-11-19 18:29:17.473613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.473643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.473878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.473908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.474279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.474309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.474652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.474683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.474921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.474951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 
00:30:16.127 [2024-11-19 18:29:17.475207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.475237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.475602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.475633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.476002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.476032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.476506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.476537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.476873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.476903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 
00:30:16.127 [2024-11-19 18:29:17.477246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.477276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.477624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.477654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.477983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.478013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.478345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.127 [2024-11-19 18:29:17.478376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.127 qpair failed and we were unable to recover it. 00:30:16.127 [2024-11-19 18:29:17.478598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.128 [2024-11-19 18:29:17.478631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.128 qpair failed and we were unable to recover it. 
00:30:16.128 [2024-11-19 18:29:17.478969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.128 [2024-11-19 18:29:17.478999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.128 qpair failed and we were unable to recover it. 00:30:16.128 [2024-11-19 18:29:17.479336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.128 [2024-11-19 18:29:17.479368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.128 qpair failed and we were unable to recover it. 00:30:16.128 [2024-11-19 18:29:17.479533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.128 [2024-11-19 18:29:17.479562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.128 qpair failed and we were unable to recover it. 00:30:16.128 [2024-11-19 18:29:17.479774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.128 [2024-11-19 18:29:17.479808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.128 qpair failed and we were unable to recover it. 00:30:16.128 [2024-11-19 18:29:17.480169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.128 [2024-11-19 18:29:17.480207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.128 qpair failed and we were unable to recover it. 
00:30:16.128 [2024-11-19 18:29:17.480553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.128 [2024-11-19 18:29:17.480583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.128 qpair failed and we were unable to recover it. 00:30:16.128 [2024-11-19 18:29:17.480975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.128 [2024-11-19 18:29:17.481005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.128 qpair failed and we were unable to recover it. 00:30:16.128 [2024-11-19 18:29:17.481338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.128 [2024-11-19 18:29:17.481370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.128 qpair failed and we were unable to recover it. 00:30:16.128 [2024-11-19 18:29:17.481777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.128 [2024-11-19 18:29:17.481808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.128 qpair failed and we were unable to recover it. 00:30:16.128 [2024-11-19 18:29:17.482171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.128 [2024-11-19 18:29:17.482203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.128 qpair failed and we were unable to recover it. 
00:30:16.128 [2024-11-19 18:29:17.482542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.482572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.482826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.482856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.483203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.483234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.483616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.483646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.483996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.484028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.484270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.484301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.484429] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization...
00:30:16.128 [2024-11-19 18:29:17.484486] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:16.128 [2024-11-19 18:29:17.484657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.484694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.484922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.484951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.485311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.485341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.485671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.485700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.486044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.486075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.486422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.486455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.486841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.486871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.487201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.487233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.487564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.487596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.487930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.487960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.488307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.488339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.488582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.488617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.488969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.489000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.489247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.489279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.489671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.489702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.489951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.489981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.490310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.490342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.490682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.490713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.491061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.491093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.491465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.491497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.491850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.491881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.492218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.492249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.492564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.492594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.492942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.492972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.493306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.493337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.493677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.493707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.494066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.494096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.494383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.494418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.494764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.494795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.495153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.495199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.495583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.495613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.495851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.495881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.496202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.496234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.496614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.496645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.497013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.497044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.497274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.497306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.497667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.497698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.498035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.498065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.498437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.498469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.498814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.498846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.499203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.499235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.499583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.499614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.499956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.499986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.500339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.500371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.500712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.500743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.501081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.501113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.501463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.501495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.501822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.501852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.502213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.502244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.502591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.128 [2024-11-19 18:29:17.502622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.128 qpair failed and we were unable to recover it.
00:30:16.128 [2024-11-19 18:29:17.503024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.503053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.503308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.503338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.503470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.503504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.503860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.503890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.504192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.504224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.504584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.504614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.505048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.505079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.505454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.505484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.505839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.505869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.506234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.506266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.506631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.506660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.506907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.506936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.507295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.507327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.507687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.507717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.508060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.508091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.508394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.508425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.508785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.508815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.509035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.509072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.509341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.509373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.509801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.509831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.510052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.510081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.510482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.510514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.510866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.510896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.511328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.511359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.511703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.511733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.512091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.512121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.512453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.512484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.512832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.512864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.513234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.513266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.513538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.513567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.513796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.513830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.514206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.514238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.514520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.514549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.514879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.514908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.515189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.515219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.515557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.515587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.515941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.515971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.516306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.129 [2024-11-19 18:29:17.516338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.129 qpair failed and we were unable to recover it.
00:30:16.129 [2024-11-19 18:29:17.516649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.516679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.517030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.517059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.517293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.517324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.517665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.517695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.518044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.518073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 
00:30:16.129 [2024-11-19 18:29:17.518439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.518471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.518799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.518829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.519083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.519112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.519316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.519346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.519701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.519732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 
00:30:16.129 [2024-11-19 18:29:17.520212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.520245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.520554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.520584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.520920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.520951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.521114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.521144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.521415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.521445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 
00:30:16.129 [2024-11-19 18:29:17.521848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.521878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.522192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.522224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.522592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.522621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.522982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.523011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.523351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.523390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 
00:30:16.129 [2024-11-19 18:29:17.523619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.523651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.523895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.523925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.524268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.524300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.524642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.524672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.525011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.525041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 
00:30:16.129 [2024-11-19 18:29:17.525396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.129 [2024-11-19 18:29:17.525427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.129 qpair failed and we were unable to recover it. 00:30:16.129 [2024-11-19 18:29:17.525775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.525805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.526046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.526075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.526483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.526515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.526875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.526904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-11-19 18:29:17.527199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.527230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.527589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.527618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.527961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.527991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.528344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.528375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.528714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.528745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-11-19 18:29:17.529020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.529051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.529328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.529358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.529593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.529625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.529812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.529842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.530238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.530269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-11-19 18:29:17.530500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.530529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.530867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.530897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.531180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.531211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.531623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.531654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.532027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.532057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-11-19 18:29:17.532299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.532329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.532671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.532701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.533047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.533077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.533432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.533463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.533798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.533829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-11-19 18:29:17.534202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.534234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.534560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.534589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.534818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.534848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.535106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.535136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.535387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.535418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-11-19 18:29:17.535633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.535663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.536017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.536047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.536423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.536455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.536843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.536873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.537208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.537245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-11-19 18:29:17.537571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.537603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.537994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.538024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.538269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.538301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.538651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.538681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.539035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.539065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-11-19 18:29:17.539467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.539498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.539846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.539876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.540242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.540273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.540627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.540656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.541061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.541090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-11-19 18:29:17.541333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.541364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.541697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.541727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.542087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.542118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.542512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.542546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.542786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.542815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-11-19 18:29:17.543207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.543238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.543580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.543611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.543953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.543983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.544427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.544457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.544688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.544718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-11-19 18:29:17.545059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.545088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.545469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.545500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.545835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.545866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.546213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.546245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-11-19 18:29:17.546598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-11-19 18:29:17.546627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-11-19 18:29:17.546967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.130 [2024-11-19 18:29:17.546996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.130 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure triplet above (errno = 111/ECONNREFUSED, same tqpair=0x7f575c000b90, addr=10.0.0.2, port=4420) repeated with new timestamps for each retry from 18:29:17.547 through 18:29:17.577 ...]
00:30:16.406 [2024-11-19 18:29:17.577949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... the same connect()/qpair-failure triplet (errno = 111, tqpair=0x7f575c000b90, addr=10.0.0.2, port=4420) continued repeating with new timestamps from 18:29:17.578 through 18:29:17.588 ...]
00:30:16.406 [2024-11-19 18:29:17.589178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.406 [2024-11-19 18:29:17.589210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.589531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.589565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.589908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.589940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.590340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.590371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.590623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.590654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 
00:30:16.407 [2024-11-19 18:29:17.591051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.591081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.591439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.591472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.591734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.591763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.592122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.592154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.592536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.592566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 
00:30:16.407 [2024-11-19 18:29:17.592924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.592954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.593252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.593283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.593678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.593708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.593945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.593977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.594329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.594361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 
00:30:16.407 [2024-11-19 18:29:17.594735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.594766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.595117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.595148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.595530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.595563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.595803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.595833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.596074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.596104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 
00:30:16.407 [2024-11-19 18:29:17.596469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.596502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.596880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.596910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.597044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.597076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.597447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.597480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.597726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.597756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 
00:30:16.407 [2024-11-19 18:29:17.598103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.598134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.598286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.598317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.598589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.598618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.598983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.599014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.599356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.599387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 
00:30:16.407 [2024-11-19 18:29:17.599600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.599629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.600007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.600038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.600289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.600327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.600578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.600609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.600969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.600999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 
00:30:16.407 [2024-11-19 18:29:17.601247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.601277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.601622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.601652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.602013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.407 [2024-11-19 18:29:17.602044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.407 qpair failed and we were unable to recover it. 00:30:16.407 [2024-11-19 18:29:17.602444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.602476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.602628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.602658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 
00:30:16.408 [2024-11-19 18:29:17.602854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.602883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.603248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.603279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.603647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.603680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.604051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.604082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.604474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.604508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 
00:30:16.408 [2024-11-19 18:29:17.604850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.604880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.605245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.605279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.605608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.605639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.605980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.606020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.606293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.606325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 
00:30:16.408 [2024-11-19 18:29:17.606671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.606704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.607026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.607060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.607307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.607341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.607730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.607763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.608135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.608174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 
00:30:16.408 [2024-11-19 18:29:17.608439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.608469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.608857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.608887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.609247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.609280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.609631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.609661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.610085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.610116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 
00:30:16.408 [2024-11-19 18:29:17.610432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.610465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.610762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.610792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.611157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.611207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.611593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.611624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 00:30:16.408 [2024-11-19 18:29:17.611964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.408 [2024-11-19 18:29:17.611996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.408 qpair failed and we were unable to recover it. 
00:30:16.408 [... repeated connect() errno = 111 / qpair-failure records elided ...]
00:30:16.408 [2024-11-19 18:29:17.613008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:16.408 [2024-11-19 18:29:17.613040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:16.408 [2024-11-19 18:29:17.613046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:16.408 [2024-11-19 18:29:17.613051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:16.408 [2024-11-19 18:29:17.613056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:16.408 [2024-11-19 18:29:17.613166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.408 [2024-11-19 18:29:17.613197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420
00:30:16.408 qpair failed and we were unable to recover it.
00:30:16.408 [... repeated connect() errno = 111 / qpair-failure records elided ...]
00:30:16.408 [2024-11-19 18:29:17.614466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:30:16.408 [2024-11-19 18:29:17.614602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:30:16.408 [2024-11-19 18:29:17.614753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:30:16.408 [2024-11-19 18:29:17.614755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:30:16.409 [... repeated connect() errno = 111 / qpair-failure records elided through 18:29:17.616 ...]
00:30:16.409 [... repeated connect() errno = 111 / qpair-failure records for tqpair=0x7f575c000b90 (10.0.0.2:4420) elided from 18:29:17.616 through 18:29:17.621 ...]
00:30:16.409 [2024-11-19 18:29:17.621907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.621936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 00:30:16.409 [2024-11-19 18:29:17.622285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.622318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 00:30:16.409 [2024-11-19 18:29:17.622662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.622692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 00:30:16.409 [2024-11-19 18:29:17.623036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.623067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 00:30:16.409 [2024-11-19 18:29:17.623447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.623479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 
00:30:16.409 [2024-11-19 18:29:17.623825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.623856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 00:30:16.409 [2024-11-19 18:29:17.624217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.624248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 00:30:16.409 [2024-11-19 18:29:17.624575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.624606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 00:30:16.409 [2024-11-19 18:29:17.624954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.624985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 00:30:16.409 [2024-11-19 18:29:17.625329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.625361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 
00:30:16.409 [2024-11-19 18:29:17.625747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.625777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 00:30:16.409 [2024-11-19 18:29:17.626138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.626178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 00:30:16.409 [2024-11-19 18:29:17.626540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.626571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 00:30:16.409 [2024-11-19 18:29:17.626910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.626939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 00:30:16.409 [2024-11-19 18:29:17.627278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.627310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 
00:30:16.409 [2024-11-19 18:29:17.627694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.627724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 00:30:16.409 [2024-11-19 18:29:17.628087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.628118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 00:30:16.409 [2024-11-19 18:29:17.628398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.628429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 00:30:16.409 [2024-11-19 18:29:17.628669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.409 [2024-11-19 18:29:17.628704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.409 qpair failed and we were unable to recover it. 00:30:16.409 [2024-11-19 18:29:17.629095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.629126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 
00:30:16.410 [2024-11-19 18:29:17.629511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.629543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.629777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.629809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.630145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.630183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.630532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.630563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.630916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.630947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 
00:30:16.410 [2024-11-19 18:29:17.631288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.631320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.631664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.631694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.632027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.632059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.632403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.632433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.632683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.632714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 
00:30:16.410 [2024-11-19 18:29:17.633044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.633075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.633442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.633475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.633693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.633725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.634050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.634080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.634412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.634444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 
00:30:16.410 [2024-11-19 18:29:17.634785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.634816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.635200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.635234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.635591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.635622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.635841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.635872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.636238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.636269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 
00:30:16.410 [2024-11-19 18:29:17.636631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.636661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.636999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.637031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.637349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.637381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.637718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.637749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.637961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.637993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 
00:30:16.410 [2024-11-19 18:29:17.638239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.638271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.638634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.638664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.639029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.639060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.639430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.639462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.639795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.639826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 
00:30:16.410 [2024-11-19 18:29:17.640178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.410 [2024-11-19 18:29:17.640210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.410 qpair failed and we were unable to recover it. 00:30:16.410 [2024-11-19 18:29:17.640557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.640586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.640918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.640947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.641197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.641227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.641575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.641605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 
00:30:16.411 [2024-11-19 18:29:17.641954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.641985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.642336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.642369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.642733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.642767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.642972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.643007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.643326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.643357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 
00:30:16.411 [2024-11-19 18:29:17.643660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.643690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.644021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.644052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.644390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.644422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.644784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.644814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.645149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.645192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 
00:30:16.411 [2024-11-19 18:29:17.645531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.645560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.645921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.645951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.646179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.646210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.646561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.646592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.646950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.646980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 
00:30:16.411 [2024-11-19 18:29:17.647201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.647231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.647553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.647583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.647915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.647945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.648286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.648317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.648618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.648648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 
00:30:16.411 [2024-11-19 18:29:17.648857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.648887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.649242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.649273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.649629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.649659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.650009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.650040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 00:30:16.411 [2024-11-19 18:29:17.650248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.411 [2024-11-19 18:29:17.650280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.411 qpair failed and we were unable to recover it. 
00:30:16.414 [2024-11-19 18:29:17.689281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-11-19 18:29:17.689313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.414 qpair failed and we were unable to recover it. 00:30:16.414 [2024-11-19 18:29:17.689549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-11-19 18:29:17.689578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.414 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.689935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.689967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.690200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.690234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.690575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.690603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 
00:30:16.415 [2024-11-19 18:29:17.690958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.690987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.691322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.691353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.691557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.691586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.691928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.691958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.692294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.692326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 
00:30:16.415 [2024-11-19 18:29:17.692692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.692722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.693084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.693115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.693489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.693520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.693730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.693762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.694065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.694095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 
00:30:16.415 [2024-11-19 18:29:17.694474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.694505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.694859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.694890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.695247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.695279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.695654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.695683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.696036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.696065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 
00:30:16.415 [2024-11-19 18:29:17.696430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.696463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.696800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.696829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.697025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.697053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.697401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.697431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.697786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.697815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 
00:30:16.415 [2024-11-19 18:29:17.698024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.698052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.698368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.698399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.698744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.698775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.699118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.699154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 00:30:16.415 [2024-11-19 18:29:17.699382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.415 [2024-11-19 18:29:17.699417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.415 qpair failed and we were unable to recover it. 
00:30:16.415 [2024-11-19 18:29:17.699631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.699660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.700008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.700038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.700276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.700307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.700638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.700668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.700995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.701025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 
00:30:16.416 [2024-11-19 18:29:17.701398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.701429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.701748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.701778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.701984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.702013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.702326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.702358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.702591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.702622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 
00:30:16.416 [2024-11-19 18:29:17.702972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.703003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.703361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.703391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.703742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.703773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.703969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.703998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.704327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.704359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 
00:30:16.416 [2024-11-19 18:29:17.704719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.704749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.704968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.704997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.705328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.705359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.705717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.705747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.705839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.705867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 
00:30:16.416 [2024-11-19 18:29:17.706303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.706413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.706835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.706873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.707314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.707350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.707653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.707682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.708034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.708065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 
00:30:16.416 [2024-11-19 18:29:17.708409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.708445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.708786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.708818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.709171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.709202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.709455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.709485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.709682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.709712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 
00:30:16.416 [2024-11-19 18:29:17.710076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.710106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.710473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.710504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.710873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.710905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.711223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.711258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.416 [2024-11-19 18:29:17.711614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.711647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 
00:30:16.416 [2024-11-19 18:29:17.712006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.416 [2024-11-19 18:29:17.712037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.416 qpair failed and we were unable to recover it. 00:30:16.417 [2024-11-19 18:29:17.712374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.417 [2024-11-19 18:29:17.712405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.417 qpair failed and we were unable to recover it. 00:30:16.417 [2024-11-19 18:29:17.712759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.417 [2024-11-19 18:29:17.712789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.417 qpair failed and we were unable to recover it. 00:30:16.417 [2024-11-19 18:29:17.713138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.417 [2024-11-19 18:29:17.713180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.417 qpair failed and we were unable to recover it. 00:30:16.417 [2024-11-19 18:29:17.713550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.417 [2024-11-19 18:29:17.713581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.417 qpair failed and we were unable to recover it. 
00:30:16.417 [2024-11-19 18:29:17.713675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.417 [2024-11-19 18:29:17.713704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.417 qpair failed and we were unable to recover it. 00:30:16.417 [2024-11-19 18:29:17.714034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.417 [2024-11-19 18:29:17.714064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.417 qpair failed and we were unable to recover it. 00:30:16.417 [2024-11-19 18:29:17.714435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.417 [2024-11-19 18:29:17.714466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.417 qpair failed and we were unable to recover it. 00:30:16.417 [2024-11-19 18:29:17.714705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.417 [2024-11-19 18:29:17.714740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.417 qpair failed and we were unable to recover it. 00:30:16.417 [2024-11-19 18:29:17.715067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.417 [2024-11-19 18:29:17.715098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.417 qpair failed and we were unable to recover it. 
00:30:16.417 [2024-11-19 18:29:17.715310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.417 [2024-11-19 18:29:17.715340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.417 qpair failed and we were unable to recover it. 00:30:16.417 [2024-11-19 18:29:17.715697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.417 [2024-11-19 18:29:17.715729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.417 qpair failed and we were unable to recover it. 00:30:16.417 [2024-11-19 18:29:17.716052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.417 [2024-11-19 18:29:17.716082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.417 qpair failed and we were unable to recover it. 00:30:16.417 [2024-11-19 18:29:17.716451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.417 [2024-11-19 18:29:17.716484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.417 qpair failed and we were unable to recover it. 00:30:16.417 [2024-11-19 18:29:17.716834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.417 [2024-11-19 18:29:17.716865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.417 qpair failed and we were unable to recover it. 
00:30:16.417 [... the same connect() failed, errno = 111 / sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 / qpair failed sequence repeats through 2024-11-19 18:29:17.755 ...]
00:30:16.420 [2024-11-19 18:29:17.756081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.756111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.756483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.756515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.756853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.756883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.757132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.757175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.757485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.757516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 
00:30:16.420 [2024-11-19 18:29:17.757865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.757894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.758254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.758287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.758650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.758681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.759031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.759061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.759371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.759402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 
00:30:16.420 [2024-11-19 18:29:17.759491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.759520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.759867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.759896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.760106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.760135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.760476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.760507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.760840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.760877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 
00:30:16.420 [2024-11-19 18:29:17.761078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.761107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.761455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.761487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.761819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.761849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.762058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.762087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.762439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.762471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 
00:30:16.420 [2024-11-19 18:29:17.762821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.762851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.763196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.763228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-11-19 18:29:17.763603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-11-19 18:29:17.763633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.763992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.764024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.764398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.764428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 
00:30:16.421 [2024-11-19 18:29:17.764759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.764790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.765025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.765054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.765367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.765398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.765605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.765634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.765940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.765969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 
00:30:16.421 [2024-11-19 18:29:17.766306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.766338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.766710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.766740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.766941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.766971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.767317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.767348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.767707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.767737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 
00:30:16.421 [2024-11-19 18:29:17.767953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.767982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.768327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.768359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.768694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.768725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.769066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.769096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.769452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.769483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 
00:30:16.421 [2024-11-19 18:29:17.769813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.769843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.770197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.770234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.770566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.770596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.770949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.770979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.771221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.771254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 
00:30:16.421 [2024-11-19 18:29:17.771581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.771612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.771946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.771977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.772332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.772363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.772598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.772627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.772990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.773020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 
00:30:16.421 [2024-11-19 18:29:17.773373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.773404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.773755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.773785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.774139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.774180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.774481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.774510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.774902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.774934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 
00:30:16.421 [2024-11-19 18:29:17.775132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.775171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.775485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.775515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.775868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.775898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.776231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.776263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-11-19 18:29:17.776464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.776494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 
00:30:16.421 [2024-11-19 18:29:17.776832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-11-19 18:29:17.776861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.422 [2024-11-19 18:29:17.777071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.777105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-11-19 18:29:17.777460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.777491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-11-19 18:29:17.777712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.777742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-11-19 18:29:17.778087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.778117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 
00:30:16.422 [2024-11-19 18:29:17.778471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.778501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-11-19 18:29:17.778714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.778743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-11-19 18:29:17.779087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.779117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-11-19 18:29:17.779447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.779479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-11-19 18:29:17.779819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.779850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 
00:30:16.422 [2024-11-19 18:29:17.780194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.780226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-11-19 18:29:17.780580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.780609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-11-19 18:29:17.780695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.780723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-11-19 18:29:17.781052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.781082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-11-19 18:29:17.781442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.781472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 
00:30:16.422 [2024-11-19 18:29:17.781817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.781847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-11-19 18:29:17.782194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.782223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-11-19 18:29:17.782423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.782451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-11-19 18:29:17.782813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.782843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-11-19 18:29:17.783068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-11-19 18:29:17.783097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 
00:30:16.425 [2024-11-19 18:29:17.820960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.820991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-11-19 18:29:17.821326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.821358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-11-19 18:29:17.821712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.821741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-11-19 18:29:17.822083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.822113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-11-19 18:29:17.822482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.822513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 
00:30:16.425 [2024-11-19 18:29:17.822857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.822886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-11-19 18:29:17.823230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.823259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-11-19 18:29:17.823600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.823630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-11-19 18:29:17.823961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.823990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-11-19 18:29:17.824326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.824357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 
00:30:16.425 [2024-11-19 18:29:17.824710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.824740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-11-19 18:29:17.825083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.825112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-11-19 18:29:17.825477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.825510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-11-19 18:29:17.825849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.825879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-11-19 18:29:17.826096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.826131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 
00:30:16.425 [2024-11-19 18:29:17.826516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.826548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-11-19 18:29:17.826892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.826922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-11-19 18:29:17.827225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.827256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-11-19 18:29:17.827492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-11-19 18:29:17.827522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-11-19 18:29:17.827730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.827762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 
00:30:16.426 [2024-11-19 18:29:17.828105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.828134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.828502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.828533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.828865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.828895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.829123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.829152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.829457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.829487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 
00:30:16.426 [2024-11-19 18:29:17.829839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.829870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.830212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.830243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.830586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.830615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.830937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.830967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.831298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.831329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 
00:30:16.426 [2024-11-19 18:29:17.831670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.831700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.831925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.831955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.832309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.832339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.832677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.832708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.832914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.832943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 
00:30:16.426 [2024-11-19 18:29:17.833276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.833308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.833654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.833684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.833900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.833930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.834237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.834266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.834630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.834659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 
00:30:16.426 [2024-11-19 18:29:17.834853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.834881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.835204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.835248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.835555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.835585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.835815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.835844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.836176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.836207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 
00:30:16.426 [2024-11-19 18:29:17.836534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.836564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.836765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.836793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.837139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.837190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.837388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.837418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.837763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.837793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 
00:30:16.426 [2024-11-19 18:29:17.838039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.838071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.838397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.838428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.838644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.838672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.839033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.839063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.839293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.839324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 
00:30:16.426 [2024-11-19 18:29:17.839684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.839714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.426 [2024-11-19 18:29:17.840052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.426 [2024-11-19 18:29:17.840082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.426 qpair failed and we were unable to recover it. 00:30:16.427 [2024-11-19 18:29:17.840434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.840465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 00:30:16.427 [2024-11-19 18:29:17.840679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.840708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 00:30:16.427 [2024-11-19 18:29:17.840980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.841008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 
00:30:16.427 [2024-11-19 18:29:17.841323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.841354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 00:30:16.427 [2024-11-19 18:29:17.841738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.841768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 00:30:16.427 [2024-11-19 18:29:17.842116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.842146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 00:30:16.427 [2024-11-19 18:29:17.842520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.842551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 00:30:16.427 [2024-11-19 18:29:17.842876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.842904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 
00:30:16.427 [2024-11-19 18:29:17.843138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.843191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 00:30:16.427 [2024-11-19 18:29:17.843517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.843546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 00:30:16.427 [2024-11-19 18:29:17.843887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.843918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 00:30:16.427 [2024-11-19 18:29:17.844252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.844289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 00:30:16.427 [2024-11-19 18:29:17.844546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.844578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 
00:30:16.427 [2024-11-19 18:29:17.844904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.844934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 00:30:16.427 [2024-11-19 18:29:17.845278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.845309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 00:30:16.427 [2024-11-19 18:29:17.845655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.845685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 00:30:16.427 [2024-11-19 18:29:17.845891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.845920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 00:30:16.427 [2024-11-19 18:29:17.846244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.427 [2024-11-19 18:29:17.846275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.427 qpair failed and we were unable to recover it. 
00:30:16.427 [2024-11-19 18:29:17.846503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.427 [2024-11-19 18:29:17.846532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:16.427 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats 115 times in total, timestamps 18:29:17.846503 through 18:29:17.885825 ...]
00:30:16.705 [2024-11-19 18:29:17.885796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.705 [2024-11-19 18:29:17.885825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:16.705 qpair failed and we were unable to recover it.
00:30:16.705 [2024-11-19 18:29:17.886146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.886184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.886403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.886434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.886645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.886674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.886942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.886971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.887173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.887205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 
00:30:16.705 [2024-11-19 18:29:17.887487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.887517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.887846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.887876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.888219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.888250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.888557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.888585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.888794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.888823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 
00:30:16.705 [2024-11-19 18:29:17.889026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.889055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.889203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.889256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.889627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.889658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.890003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.890032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.890240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.890271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 
00:30:16.705 [2024-11-19 18:29:17.890630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.890659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.891002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.891032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.891396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.891428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.891657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.891688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.892042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.892072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 
00:30:16.705 [2024-11-19 18:29:17.892425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.892456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.892796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.892825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.893168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.893200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.893292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.893320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.893614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.893642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 
00:30:16.705 [2024-11-19 18:29:17.893850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.893879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.894221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.894253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.894575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.894605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.894933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.705 [2024-11-19 18:29:17.894963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.705 qpair failed and we were unable to recover it. 00:30:16.705 [2024-11-19 18:29:17.895290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.706 [2024-11-19 18:29:17.895321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.706 qpair failed and we were unable to recover it. 
00:30:16.706 [2024-11-19 18:29:17.895574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.706 [2024-11-19 18:29:17.895602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.706 qpair failed and we were unable to recover it. 00:30:16.706 [2024-11-19 18:29:17.896009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.706 [2024-11-19 18:29:17.896039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.706 qpair failed and we were unable to recover it. 00:30:16.706 [2024-11-19 18:29:17.896389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.706 [2024-11-19 18:29:17.896420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.706 qpair failed and we were unable to recover it. 00:30:16.706 [2024-11-19 18:29:17.896647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.706 [2024-11-19 18:29:17.896679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.706 qpair failed and we were unable to recover it. 00:30:16.706 [2024-11-19 18:29:17.896910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.706 [2024-11-19 18:29:17.896940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.706 qpair failed and we were unable to recover it. 
00:30:16.706 [2024-11-19 18:29:17.897292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.706 [2024-11-19 18:29:17.897323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.706 qpair failed and we were unable to recover it. 00:30:16.706 [2024-11-19 18:29:17.897545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.706 [2024-11-19 18:29:17.897574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.706 qpair failed and we were unable to recover it. 00:30:16.706 [2024-11-19 18:29:17.897930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.706 [2024-11-19 18:29:17.897959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.706 qpair failed and we were unable to recover it. 00:30:16.706 [2024-11-19 18:29:17.898324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.706 [2024-11-19 18:29:17.898355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.706 qpair failed and we were unable to recover it. 00:30:16.706 [2024-11-19 18:29:17.898699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.706 [2024-11-19 18:29:17.898730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.706 qpair failed and we were unable to recover it. 
00:30:16.706 [2024-11-19 18:29:17.899078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.706 [2024-11-19 18:29:17.899108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.706 qpair failed and we were unable to recover it. 00:30:16.706 [2024-11-19 18:29:17.899495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.706 [2024-11-19 18:29:17.899532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.706 qpair failed and we were unable to recover it. 00:30:16.706 [2024-11-19 18:29:17.899744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.706 [2024-11-19 18:29:17.899773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.706 qpair failed and we were unable to recover it. 00:30:16.706 [2024-11-19 18:29:17.900105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.706 [2024-11-19 18:29:17.900135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.706 qpair failed and we were unable to recover it. 00:30:16.706 [2024-11-19 18:29:17.900476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.706 [2024-11-19 18:29:17.900505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.706 qpair failed and we were unable to recover it. 
00:30:16.706 [2024-11-19 18:29:17.902135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.706 [2024-11-19 18:29:17.902260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420
00:30:16.706 qpair failed and we were unable to recover it.
00:30:16.707 [2024-11-19 18:29:17.918244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.707 [2024-11-19 18:29:17.918274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.707 qpair failed and we were unable to recover it. 00:30:16.707 [2024-11-19 18:29:17.918625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.707 [2024-11-19 18:29:17.918655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.707 qpair failed and we were unable to recover it. 00:30:16.707 [2024-11-19 18:29:17.918999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.707 [2024-11-19 18:29:17.919029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.707 qpair failed and we were unable to recover it. 00:30:16.707 [2024-11-19 18:29:17.919370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.707 [2024-11-19 18:29:17.919401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.707 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.919745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.919775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 
00:30:16.708 [2024-11-19 18:29:17.920117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.920146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.920494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.920523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.920854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.920884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.921231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.921261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.921484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.921513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 
00:30:16.708 [2024-11-19 18:29:17.921737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.921767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.922103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.922133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.922389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.922419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.922756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.922787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.923005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.923036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 
00:30:16.708 [2024-11-19 18:29:17.923378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.923410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.923729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.923758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.924006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.924039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.924391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.924422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.924736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.924767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 
00:30:16.708 [2024-11-19 18:29:17.925108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.925138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.925503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.925536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.925862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.925893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.926099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.926129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.926336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.926368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 
00:30:16.708 [2024-11-19 18:29:17.926742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.926773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.927117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.927154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.927380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.927410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.927705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.927735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.928059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.928089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 
00:30:16.708 [2024-11-19 18:29:17.928442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.928474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.928687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.928717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.929058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.929087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.929319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.929349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.929690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.929720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 
00:30:16.708 [2024-11-19 18:29:17.930063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.930094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.930455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.930487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.930829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.930859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.931226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.931259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.931591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.931620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 
00:30:16.708 [2024-11-19 18:29:17.931974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-11-19 18:29:17.932004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-11-19 18:29:17.932338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.932370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.932565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.932594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.932936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.932966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.933362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.933394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 
00:30:16.709 [2024-11-19 18:29:17.933733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.933763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.934108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.934138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.934503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.934535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.934756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.934784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.935137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.935176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 
00:30:16.709 [2024-11-19 18:29:17.935525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.935556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.935898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.935928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.936281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.936313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.936653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.936684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.937025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.937056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 
00:30:16.709 [2024-11-19 18:29:17.937383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.937413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.937757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.937787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.938126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.938156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.938498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.938529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.938871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.938901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 
00:30:16.709 [2024-11-19 18:29:17.939116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.939145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.939507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.939539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.939890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.939921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.940253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.940294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.940388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.940418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 
00:30:16.709 [2024-11-19 18:29:17.940740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.940770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.941113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.941149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.941511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.941541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.941754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.941783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.942012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.942042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 
00:30:16.709 [2024-11-19 18:29:17.942405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.942436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.942778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.942808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.943167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.943198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.943542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.943571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.943915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.943944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 
00:30:16.709 [2024-11-19 18:29:17.944282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.944314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.944534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.944564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.944773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.944802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-11-19 18:29:17.945103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-11-19 18:29:17.945132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.710 [2024-11-19 18:29:17.945484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.710 [2024-11-19 18:29:17.945515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.710 qpair failed and we were unable to recover it. 
00:30:16.713 [2024-11-19 18:29:17.983445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.983474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.983525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e31e00 (9): Bad file descriptor 00:30:16.713 [2024-11-19 18:29:17.984197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.984288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.984652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.984691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.985058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.985090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.985438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.985528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 
00:30:16.713 [2024-11-19 18:29:17.985828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.985867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.986218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.986265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.986619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.986650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.986989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.987019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.987387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.987441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 
00:30:16.713 [2024-11-19 18:29:17.987789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.987820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.988150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.988195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.988537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.988568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.988869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.988900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.989121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.989151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 
00:30:16.713 [2024-11-19 18:29:17.989434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.989465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.989813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.989844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.990177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.990209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.990567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.990596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.990800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.990830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 
00:30:16.713 [2024-11-19 18:29:17.991173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.991206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.991512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.991540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.991883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.991914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.992257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.992290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.992638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.992667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 
00:30:16.713 [2024-11-19 18:29:17.992866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.992896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.993182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.993231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.993545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.993578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.993929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.993959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.994287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.994318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 
00:30:16.713 [2024-11-19 18:29:17.994669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.994699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.994895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.994925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.995154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.995196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.713 qpair failed and we were unable to recover it. 00:30:16.713 [2024-11-19 18:29:17.995569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.713 [2024-11-19 18:29:17.995599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:17.995939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:17.995970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 
00:30:16.714 [2024-11-19 18:29:17.996204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:17.996237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:17.996537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:17.996565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:17.996912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:17.996943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:17.997293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:17.997324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:17.997668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:17.997698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 
00:30:16.714 [2024-11-19 18:29:17.998040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:17.998071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:17.998379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:17.998410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:17.998761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:17.998792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:17.999133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:17.999172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:17.999379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:17.999409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 
00:30:16.714 [2024-11-19 18:29:17.999774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:17.999804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.000109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.000140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.000476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.000507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.000854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.000885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.001144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.001189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 
00:30:16.714 [2024-11-19 18:29:18.001579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.001616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.001839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.001872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.002187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.002219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.002420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.002449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.002759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.002790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 
00:30:16.714 [2024-11-19 18:29:18.003140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.003189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.003547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.003577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.003898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.003929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.004277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.004309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.004650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.004680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 
00:30:16.714 [2024-11-19 18:29:18.004901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.004934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.005276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.005307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.005648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.005677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.006042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.006072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.006406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.006438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 
00:30:16.714 [2024-11-19 18:29:18.006780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.006812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.007185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.007217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.007429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.007459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.007796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.007826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.008176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.008208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 
00:30:16.714 [2024-11-19 18:29:18.008538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.008567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.714 [2024-11-19 18:29:18.008905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.714 [2024-11-19 18:29:18.008934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.714 qpair failed and we were unable to recover it. 00:30:16.715 [2024-11-19 18:29:18.009277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.715 [2024-11-19 18:29:18.009308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.715 qpair failed and we were unable to recover it. 00:30:16.715 [2024-11-19 18:29:18.009636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.715 [2024-11-19 18:29:18.009666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.715 qpair failed and we were unable to recover it. 00:30:16.715 [2024-11-19 18:29:18.010017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.715 [2024-11-19 18:29:18.010048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420 00:30:16.715 qpair failed and we were unable to recover it. 
00:30:16.715 [2024-11-19 18:29:18.010394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.715 [2024-11-19 18:29:18.010427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3c0c0 with addr=10.0.0.2, port=4420
00:30:16.715 qpair failed and we were unable to recover it.
00:30:16.716 [2024-11-19 18:29:18.032216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.716 [2024-11-19 18:29:18.032323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420
00:30:16.716 qpair failed and we were unable to recover it.
00:30:16.718 [2024-11-19 18:29:18.050131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.050167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.050315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.050346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.050706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.050736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.050905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.050933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.051277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.051308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 
00:30:16.718 [2024-11-19 18:29:18.051653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.051685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.051905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.051939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.052275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.052307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.052654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.052691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.053034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.053064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 
00:30:16.718 [2024-11-19 18:29:18.053422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.053452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.053790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.053820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.054039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.054069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.054412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.054444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.054780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.054810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 
00:30:16.718 [2024-11-19 18:29:18.055013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.055043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.055400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.055432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.055775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.055805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.056040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.056069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.056400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.056432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 
00:30:16.718 [2024-11-19 18:29:18.056773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.056803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.057020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.057050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.057419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.057451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.057653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.057682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.058023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.058054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 
00:30:16.718 [2024-11-19 18:29:18.058408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.058439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.058790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.058821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.059175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.059206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.059509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.718 [2024-11-19 18:29:18.059538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.718 qpair failed and we were unable to recover it. 00:30:16.718 [2024-11-19 18:29:18.059889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.059919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 
00:30:16.719 [2024-11-19 18:29:18.060279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.060311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.060643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.060672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.060876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.060906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.061243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.061275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.061485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.061515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 
00:30:16.719 [2024-11-19 18:29:18.061854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.061884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.062093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.062126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.062507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.062539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.062868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.062899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.063242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.063274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 
00:30:16.719 [2024-11-19 18:29:18.063639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.063669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.064025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.064054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.064395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.064427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.064633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.064662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.064857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.064888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 
00:30:16.719 [2024-11-19 18:29:18.065209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.065240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.065554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.065584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.065783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.065813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.066133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.066179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.066504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.066535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 
00:30:16.719 [2024-11-19 18:29:18.066846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.066878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.067229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.067261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.067662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.067693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.067940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.067972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.068319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.068351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 
00:30:16.719 [2024-11-19 18:29:18.068700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.068730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.069081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.069110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.069455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.069485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.069823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.069853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.070208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.070239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 
00:30:16.719 [2024-11-19 18:29:18.070446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.070475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.070817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.070845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.071189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.071221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.071582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.071611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.071938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.071969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 
00:30:16.719 [2024-11-19 18:29:18.072297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.072329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.719 qpair failed and we were unable to recover it. 00:30:16.719 [2024-11-19 18:29:18.072691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.719 [2024-11-19 18:29:18.072721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.720 qpair failed and we were unable to recover it. 00:30:16.720 [2024-11-19 18:29:18.073065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.720 [2024-11-19 18:29:18.073097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.720 qpair failed and we were unable to recover it. 00:30:16.720 [2024-11-19 18:29:18.073438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.720 [2024-11-19 18:29:18.073469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.720 qpair failed and we were unable to recover it. 00:30:16.720 [2024-11-19 18:29:18.073812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.720 [2024-11-19 18:29:18.073843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.720 qpair failed and we were unable to recover it. 
00:30:16.720 [2024-11-19 18:29:18.074057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.720 [2024-11-19 18:29:18.074087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.720 qpair failed and we were unable to recover it. 00:30:16.720 [2024-11-19 18:29:18.074307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.720 [2024-11-19 18:29:18.074338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.720 qpair failed and we were unable to recover it. 00:30:16.720 [2024-11-19 18:29:18.074681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.720 [2024-11-19 18:29:18.074711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.720 qpair failed and we were unable to recover it. 00:30:16.720 [2024-11-19 18:29:18.075050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.720 [2024-11-19 18:29:18.075081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.720 qpair failed and we were unable to recover it. 00:30:16.720 [2024-11-19 18:29:18.075432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.720 [2024-11-19 18:29:18.075464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.720 qpair failed and we were unable to recover it. 
00:30:16.720 [2024-11-19 18:29:18.075814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.720 [2024-11-19 18:29:18.075845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.720 qpair failed and we were unable to recover it. 00:30:16.720 [2024-11-19 18:29:18.076194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.720 [2024-11-19 18:29:18.076226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.720 qpair failed and we were unable to recover it. 00:30:16.720 [2024-11-19 18:29:18.076566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.720 [2024-11-19 18:29:18.076596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.720 qpair failed and we were unable to recover it. 00:30:16.720 [2024-11-19 18:29:18.076798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.720 [2024-11-19 18:29:18.076827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.720 qpair failed and we were unable to recover it. 00:30:16.720 [2024-11-19 18:29:18.077169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.720 [2024-11-19 18:29:18.077201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.720 qpair failed and we were unable to recover it. 
00:30:16.723 [2024-11-19 18:29:18.115429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.115462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.115665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.115694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.115927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.115957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.116300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.116330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.116678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.116709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 
00:30:16.723 [2024-11-19 18:29:18.117073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.117109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.117471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.117501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.117714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.117744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.118086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.118115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.118467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.118498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 
00:30:16.723 [2024-11-19 18:29:18.118849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.118880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.119182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.119214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.119531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.119562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.119760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.119789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.120122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.120151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 
00:30:16.723 [2024-11-19 18:29:18.120515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.120546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.120881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.120912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.121109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.121138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.121500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.121531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.121864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.121895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 
00:30:16.723 [2024-11-19 18:29:18.122222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.122253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.122605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.122635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.723 [2024-11-19 18:29:18.123000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.723 [2024-11-19 18:29:18.123030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.723 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.123256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.123288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.123608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.123638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 
00:30:16.724 [2024-11-19 18:29:18.123984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.124015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.124395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.124426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.124755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.124786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.125123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.125153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.125526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.125556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 
00:30:16.724 [2024-11-19 18:29:18.125906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.125936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.126285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.126315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.126543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.126573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.126923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.126953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.127293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.127325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 
00:30:16.724 [2024-11-19 18:29:18.127667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.127697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.128069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.128099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.128435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.128465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.128766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.128796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.129122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.129152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 
00:30:16.724 [2024-11-19 18:29:18.129493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.129524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.129730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.129759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.129966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.129995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.130355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.130385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.130716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.130746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 
00:30:16.724 [2024-11-19 18:29:18.131097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.131132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.131495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.131526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.131858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.131889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.132222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.132254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.132598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.132627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 
00:30:16.724 [2024-11-19 18:29:18.132847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.132876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.133235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.133266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.133503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.133535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.133906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.133936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.134275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.134307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 
00:30:16.724 [2024-11-19 18:29:18.134650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.134680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.135043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.135073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.135303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.135334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.135537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.135567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.724 [2024-11-19 18:29:18.135928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.135957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 
00:30:16.724 [2024-11-19 18:29:18.136315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.724 [2024-11-19 18:29:18.136346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.724 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.136693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.136723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.137069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.137099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.137460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.137492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.137842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.137872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 
00:30:16.725 [2024-11-19 18:29:18.138219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.138250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.138566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.138597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.138940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.138969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.139195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.139225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.139427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.139456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 
00:30:16.725 [2024-11-19 18:29:18.139660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.139689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.140026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.140056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.140421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.140453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.140658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.140689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.141018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.141048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 
00:30:16.725 [2024-11-19 18:29:18.141413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.141445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.141797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.141827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.142207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.142237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.142592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.142622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.142813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.142841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 
00:30:16.725 [2024-11-19 18:29:18.143057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.143087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.143451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.143482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.143849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.143878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.144213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.144244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.144605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.144635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 
00:30:16.725 [2024-11-19 18:29:18.144853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.144887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.145268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.145298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.145596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.145628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.145953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.145983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.146188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.146220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 
00:30:16.725 [2024-11-19 18:29:18.146426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.146457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.146816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.146846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.147058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.147090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.147444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.147476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.147804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.147835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 
00:30:16.725 [2024-11-19 18:29:18.148192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.148222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.148428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.148457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.148806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.725 [2024-11-19 18:29:18.148836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.725 qpair failed and we were unable to recover it. 00:30:16.725 [2024-11-19 18:29:18.149176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.149206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.149545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.149576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 
00:30:16.726 [2024-11-19 18:29:18.149918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.149948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.150296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.150327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.150666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.150695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.151043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.151073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.151409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.151439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 
00:30:16.726 [2024-11-19 18:29:18.151782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.151812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.152146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.152207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.152560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.152589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.152921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.152951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.153177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.153207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 
00:30:16.726 [2024-11-19 18:29:18.153533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.153563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.153858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.153887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.154218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.154249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.154476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.154505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.154858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.154888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 
00:30:16.726 [2024-11-19 18:29:18.155239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.155270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.155632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.155662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.156007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.156037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.156398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.156430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.156766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.156796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 
00:30:16.726 [2024-11-19 18:29:18.157133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.157172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.157369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.157399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:16.726 [2024-11-19 18:29:18.157515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.726 [2024-11-19 18:29:18.157543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:16.726 qpair failed and we were unable to recover it. 00:30:17.001 [2024-11-19 18:29:18.157890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.157921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 00:30:17.001 [2024-11-19 18:29:18.158262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.158294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 
00:30:17.001 [2024-11-19 18:29:18.158655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.158690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 00:30:17.001 [2024-11-19 18:29:18.158779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.158807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 00:30:17.001 [2024-11-19 18:29:18.159341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.159431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 00:30:17.001 [2024-11-19 18:29:18.159776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.159814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 00:30:17.001 [2024-11-19 18:29:18.160226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.160274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 
00:30:17.001 [2024-11-19 18:29:18.160630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.160659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 00:30:17.001 [2024-11-19 18:29:18.160998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.161028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 00:30:17.001 [2024-11-19 18:29:18.161247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.161277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 00:30:17.001 [2024-11-19 18:29:18.161473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.161502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 00:30:17.001 [2024-11-19 18:29:18.161865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.161895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 
00:30:17.001 [2024-11-19 18:29:18.162232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.162264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 00:30:17.001 [2024-11-19 18:29:18.162603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.162633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 00:30:17.001 [2024-11-19 18:29:18.162977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.163007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 00:30:17.001 [2024-11-19 18:29:18.163217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.163247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 00:30:17.001 [2024-11-19 18:29:18.163564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.163594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 
00:30:17.001 [2024-11-19 18:29:18.163961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.163991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 00:30:17.001 [2024-11-19 18:29:18.164183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.001 [2024-11-19 18:29:18.164213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.001 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.164439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.164476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.164690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.164720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.165054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.165085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 
00:30:17.002 [2024-11-19 18:29:18.165425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.165457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.165795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.165825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.166038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.166068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.166383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.166414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.166738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.166767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 
00:30:17.002 [2024-11-19 18:29:18.167131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.167175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.167530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.167560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.167891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.167921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.168012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.168043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f575c000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.168549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.168641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 
00:30:17.002 [2024-11-19 18:29:18.169016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.169053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.169497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.169589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.169884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.169922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.170358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.170450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.170775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.170812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 
00:30:17.002 [2024-11-19 18:29:18.171010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.171040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.171248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.171283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.171664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.171695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.171998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.172028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.172380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.172413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 
00:30:17.002 [2024-11-19 18:29:18.172754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.172796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.173148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.173192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.173422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.173452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.173813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.173843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 00:30:17.002 [2024-11-19 18:29:18.174210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.174242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 
00:30:17.002 [2024-11-19 18:29:18.174588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.002 [2024-11-19 18:29:18.174619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.002 qpair failed and we were unable to recover it. 
00:30:17.005 [the same three-line sequence — posix.c:1054:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously from 18:29:18.174960 through 18:29:18.214087] 
00:30:17.005 [2024-11-19 18:29:18.214427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-11-19 18:29:18.214471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-11-19 18:29:18.214709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-11-19 18:29:18.214742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-11-19 18:29:18.214973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-11-19 18:29:18.215003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-11-19 18:29:18.215258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-11-19 18:29:18.215289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-11-19 18:29:18.215622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-11-19 18:29:18.215652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 
00:30:17.005 [2024-11-19 18:29:18.215972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-11-19 18:29:18.216001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.005 [2024-11-19 18:29:18.216213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.005 [2024-11-19 18:29:18.216244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.005 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.216589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.216619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.216928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.216958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.217290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.217321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 
00:30:17.006 [2024-11-19 18:29:18.217657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.217687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.217891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.217921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.218151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.218191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.218558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.218587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.218933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.218964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 
00:30:17.006 [2024-11-19 18:29:18.219309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.219340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.219700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.219730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.219822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.219851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.220289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.220382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.220638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.220677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 
00:30:17.006 [2024-11-19 18:29:18.220902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.220940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.221437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.221528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.221845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.221883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.222238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.222272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.222475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.222506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 
00:30:17.006 [2024-11-19 18:29:18.222848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.222880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.223155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.223198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.223559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.223591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.223952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.223983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.224224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.224256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 
00:30:17.006 [2024-11-19 18:29:18.224614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.224645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.224969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.225000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.225219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.225252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.225550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.225582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.225905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.225935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 
00:30:17.006 [2024-11-19 18:29:18.226179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.226212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.226551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.226581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.226912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.226944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.006 qpair failed and we were unable to recover it. 00:30:17.006 [2024-11-19 18:29:18.227250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.006 [2024-11-19 18:29:18.227280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.227625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.227656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 
00:30:17.007 [2024-11-19 18:29:18.227982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.228018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.228340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.228372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.228704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.228734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.229064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.229094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.229404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.229435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 
00:30:17.007 [2024-11-19 18:29:18.229671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.229703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.230022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.230052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.230405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.230436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.230759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.230789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.231120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.231150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 
00:30:17.007 [2024-11-19 18:29:18.231509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.231539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.231874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.231906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.232239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.232269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.232481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.232511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.232841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.232872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 
00:30:17.007 [2024-11-19 18:29:18.233222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.233253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.233560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.233590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.233918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.233948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.234271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.234303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.234619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.234649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 
00:30:17.007 [2024-11-19 18:29:18.234849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.234878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.235230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.235262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.235468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.235497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.235838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.235867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.236232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.236263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 
00:30:17.007 [2024-11-19 18:29:18.236615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.236647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.236991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.237021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.237460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.237494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.237710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.237740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.238037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.238066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 
00:30:17.007 [2024-11-19 18:29:18.238264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.238294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.238630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.238661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.238909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.238937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.239282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.239313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 00:30:17.007 [2024-11-19 18:29:18.239712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.239742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it. 
00:30:17.007 [2024-11-19 18:29:18.240072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.007 [2024-11-19 18:29:18.240101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5764000b90 with addr=10.0.0.2, port=4420 00:30:17.007 qpair failed and we were unable to recover it.
[previous message repeated 35 more times for tqpair=0x7f5764000b90 between 18:29:18.240443 and 18:29:18.251710]
00:30:17.008 [2024-11-19 18:29:18.252253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.008 [2024-11-19 18:29:18.252348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.008 qpair failed and we were unable to recover it.
[previous message repeated 78 more times for tqpair=0x7f5758000b90 between 18:29:18.252620 and 18:29:18.278910]
00:30:17.011 [2024-11-19 18:29:18.279276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.011 [2024-11-19 18:29:18.279307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.011 qpair failed and we were unable to recover it. 00:30:17.011 [2024-11-19 18:29:18.279659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.011 [2024-11-19 18:29:18.279690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.011 qpair failed and we were unable to recover it. 00:30:17.011 [2024-11-19 18:29:18.280039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.011 [2024-11-19 18:29:18.280069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.011 qpair failed and we were unable to recover it. 00:30:17.011 [2024-11-19 18:29:18.280427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.011 [2024-11-19 18:29:18.280459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.011 qpair failed and we were unable to recover it. 00:30:17.011 [2024-11-19 18:29:18.280816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.011 [2024-11-19 18:29:18.280847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.011 qpair failed and we were unable to recover it. 
00:30:17.011 [2024-11-19 18:29:18.281189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.011 [2024-11-19 18:29:18.281220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.011 qpair failed and we were unable to recover it. 00:30:17.011 [2024-11-19 18:29:18.281579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.011 [2024-11-19 18:29:18.281608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.011 qpair failed and we were unable to recover it. 00:30:17.011 [2024-11-19 18:29:18.281806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.011 [2024-11-19 18:29:18.281835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.011 qpair failed and we were unable to recover it. 00:30:17.011 [2024-11-19 18:29:18.282175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.011 [2024-11-19 18:29:18.282207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.011 qpair failed and we were unable to recover it. 00:30:17.011 [2024-11-19 18:29:18.282425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.011 [2024-11-19 18:29:18.282454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.011 qpair failed and we were unable to recover it. 
00:30:17.011 [2024-11-19 18:29:18.282794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.011 [2024-11-19 18:29:18.282824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.011 qpair failed and we were unable to recover it. 00:30:17.011 [2024-11-19 18:29:18.283168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.011 [2024-11-19 18:29:18.283201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.011 qpair failed and we were unable to recover it. 00:30:17.011 [2024-11-19 18:29:18.283509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.011 [2024-11-19 18:29:18.283538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.011 qpair failed and we were unable to recover it. 00:30:17.011 [2024-11-19 18:29:18.283890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.011 [2024-11-19 18:29:18.283920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.011 qpair failed and we were unable to recover it. 00:30:17.011 [2024-11-19 18:29:18.284287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.011 [2024-11-19 18:29:18.284319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.011 qpair failed and we were unable to recover it. 
00:30:17.011 [2024-11-19 18:29:18.284673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.011 [2024-11-19 18:29:18.284703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420
00:30:17.011 qpair failed and we were unable to recover it.
00:30:17.011 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:17.011 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:30:17.011 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:17.011 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:17.011 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:17.011 [... interleaved connect() failed, errno = 111 / sock connection error / qpair failed message group repeats for tqpair=0x7f5758000b90, addr=10.0.0.2, port=4420 through 18:29:18.287584 ...]
00:30:17.013 [... the same connect() failed, errno = 111 / sock connection error / qpair failed message group repeats for tqpair=0x7f5758000b90, addr=10.0.0.2, port=4420 through 18:29:18.311604 ...]
00:30:17.013 [2024-11-19 18:29:18.311970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:17.013 [2024-11-19 18:29:18.312001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420
00:30:17.013 qpair failed and we were unable to recover it.
00:30:17.013 [2024-11-19 18:29:18.312330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.013 [2024-11-19 18:29:18.312362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-11-19 18:29:18.312705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.013 [2024-11-19 18:29:18.312737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-11-19 18:29:18.313080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.013 [2024-11-19 18:29:18.313111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-11-19 18:29:18.313449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.013 [2024-11-19 18:29:18.313480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-11-19 18:29:18.313831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.013 [2024-11-19 18:29:18.313861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.013 qpair failed and we were unable to recover it. 
00:30:17.013 [2024-11-19 18:29:18.314205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.013 [2024-11-19 18:29:18.314237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-11-19 18:29:18.314556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.013 [2024-11-19 18:29:18.314585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-11-19 18:29:18.314799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.013 [2024-11-19 18:29:18.314829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-11-19 18:29:18.315173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.013 [2024-11-19 18:29:18.315205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-11-19 18:29:18.315545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.013 [2024-11-19 18:29:18.315574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.013 qpair failed and we were unable to recover it. 
00:30:17.013 [2024-11-19 18:29:18.315922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.013 [2024-11-19 18:29:18.315952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-11-19 18:29:18.316302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.013 [2024-11-19 18:29:18.316334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.013 qpair failed and we were unable to recover it. 00:30:17.013 [2024-11-19 18:29:18.316693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.316723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.317061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.317092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.317336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.317371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 
00:30:17.014 [2024-11-19 18:29:18.317707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.317737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.318095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.318125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.318498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.318530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.318907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.318938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.319239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.319270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 
00:30:17.014 [2024-11-19 18:29:18.319626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.319656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.319989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.320020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.320224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.320255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.320619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.320648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.320974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.321005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 
00:30:17.014 [2024-11-19 18:29:18.321375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.321406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.321749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.321780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.322017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.322047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.322296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.322329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.322668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.322699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 
00:30:17.014 [2024-11-19 18:29:18.323066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.323095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.323420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.323453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.323792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.323822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.324183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.324214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.014 [2024-11-19 18:29:18.324548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.324580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 
00:30:17.014 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:17.014 [2024-11-19 18:29:18.324905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.324936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.014 [2024-11-19 18:29:18.325304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.325335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.014 [2024-11-19 18:29:18.325567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.325597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.325938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.325967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 
00:30:17.014 [2024-11-19 18:29:18.326303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.326335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.326677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.326707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.327035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.327065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.327407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.327438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.327780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.327809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 
00:30:17.014 [2024-11-19 18:29:18.328156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.328198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.328436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.328465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.328794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.328824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.329152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.014 [2024-11-19 18:29:18.329193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.014 qpair failed and we were unable to recover it. 00:30:17.014 [2024-11-19 18:29:18.329403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.329433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 
00:30:17.015 [2024-11-19 18:29:18.329775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.329805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.330148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.330188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.330398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.330426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.330773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.330803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.331029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.331062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 
00:30:17.015 [2024-11-19 18:29:18.331399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.331431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.331771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.331801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.332148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.332187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.332523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.332553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.332891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.332921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 
00:30:17.015 [2024-11-19 18:29:18.333262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.333299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.333650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.333680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.334029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.334059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.334418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.334450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.334807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.334837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 
00:30:17.015 [2024-11-19 18:29:18.335194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.335225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.335570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.335600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.335812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.335842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.336075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.336108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.336480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.336511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 
00:30:17.015 [2024-11-19 18:29:18.336860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.336891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.337224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.337255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.337603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.337633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.337971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.338002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.338374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.338405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 
00:30:17.015 [2024-11-19 18:29:18.338741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.338770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.338998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.339027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.339353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.339385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.339731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.339761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.340093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.340123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 
00:30:17.015 [2024-11-19 18:29:18.340520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.340550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.340758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.340787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.015 [2024-11-19 18:29:18.341126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.015 [2024-11-19 18:29:18.341156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.015 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.341521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.341551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.341879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.341908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 
00:30:17.016 [2024-11-19 18:29:18.342284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.342315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.342664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.342694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.343032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.343063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.343368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.343400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.343733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.343763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 
00:30:17.016 [2024-11-19 18:29:18.343973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.344002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.344342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.344373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.344714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.344744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.345075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.345105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.345478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.345510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 
00:30:17.016 [2024-11-19 18:29:18.345842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.345872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.346211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.346242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.346544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.346573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.346665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.346696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.347016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.347045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 
00:30:17.016 [2024-11-19 18:29:18.347386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.347423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.347637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.347669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.348001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.348031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.348367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.348398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.348740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.348770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 
00:30:17.016 [2024-11-19 18:29:18.349096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.349126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.349521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.349552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.349873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.349903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.350106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.350136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.350368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.350401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 
00:30:17.016 [2024-11-19 18:29:18.350755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.350786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.351129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.351170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.351396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.351426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.351766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.351797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.352147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.352188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 
00:30:17.016 [2024-11-19 18:29:18.352423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.352456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.352776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.352806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.353140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.353181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.353561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.353591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 00:30:17.016 [2024-11-19 18:29:18.353939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.353969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.016 qpair failed and we were unable to recover it. 
00:30:17.016 [2024-11-19 18:29:18.354212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.016 [2024-11-19 18:29:18.354243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.354458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.354488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.354625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.354654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.355041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.355071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.355406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.355437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 
00:30:17.017 [2024-11-19 18:29:18.355782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.355812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.356130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.356181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.356535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.356566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.356893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.356922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.357270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.357303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 
00:30:17.017 [2024-11-19 18:29:18.357686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.357717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.358043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.358073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.358296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.358327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.358549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.358578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.358916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.358947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 
00:30:17.017 [2024-11-19 18:29:18.359311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.359342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.359668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.359698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.360049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.360079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.360446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.360476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.360712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.360741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 
00:30:17.017 [2024-11-19 18:29:18.361094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.361129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.361397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.361430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.361773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.361803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.361999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.362028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.362356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.362388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 
00:30:17.017 [2024-11-19 18:29:18.362732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.362763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.363101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.363130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.363339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.363369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.363708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.363738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.363983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.364012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 
00:30:17.017 [2024-11-19 18:29:18.364365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.364396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.364734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.364765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.365108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.365137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.365493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.365524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.365868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.365899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 
00:30:17.017 [2024-11-19 18:29:18.366119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.366149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.366500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 Malloc0 00:30:17.017 [2024-11-19 18:29:18.366531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.366886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.017 [2024-11-19 18:29:18.366918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.017 qpair failed and we were unable to recover it. 00:30:17.017 [2024-11-19 18:29:18.367280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.018 [2024-11-19 18:29:18.367312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 
00:30:17.018 [2024-11-19 18:29:18.367642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.367672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:17.018 qpair failed and we were unable to recover it. 00:30:17.018 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.018 [2024-11-19 18:29:18.368008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.368038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 00:30:17.018 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.018 [2024-11-19 18:29:18.368272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.368302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 00:30:17.018 [2024-11-19 18:29:18.368611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.368641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 
00:30:17.018 [2024-11-19 18:29:18.368938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.368969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 00:30:17.018 [2024-11-19 18:29:18.369189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.369219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 00:30:17.018 [2024-11-19 18:29:18.369584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.369615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 00:30:17.018 [2024-11-19 18:29:18.369972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.370002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 00:30:17.018 [2024-11-19 18:29:18.370379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.370409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 
00:30:17.018 [2024-11-19 18:29:18.370763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.370793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 00:30:17.018 [2024-11-19 18:29:18.371173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.371205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 00:30:17.018 [2024-11-19 18:29:18.371429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.371460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 00:30:17.018 [2024-11-19 18:29:18.371590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.371620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 00:30:17.018 [2024-11-19 18:29:18.371827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.371856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 
00:30:17.018 [2024-11-19 18:29:18.372085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.372114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 00:30:17.018 [2024-11-19 18:29:18.372462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.372492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 00:30:17.018 [2024-11-19 18:29:18.372701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.372729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 00:30:17.018 [2024-11-19 18:29:18.373063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.373092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 00:30:17.018 [2024-11-19 18:29:18.373393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.018 [2024-11-19 18:29:18.373425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.018 qpair failed and we were unable to recover it. 
00:30:17.018 [2024-11-19 18:29:18.373936] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:30:17.019 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.019 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:17.019 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:17.019 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:30:17.020 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.020 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
00:30:17.020 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.020 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:30:17.021 [2024-11-19 18:29:18.406194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.406225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 [2024-11-19 18:29:18.406470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.406499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 [2024-11-19 18:29:18.406831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.406860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.021 [2024-11-19 18:29:18.407204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.407235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 
00:30:17.021 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:17.021 [2024-11-19 18:29:18.407539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.407569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.021 [2024-11-19 18:29:18.407898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.407928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.021 [2024-11-19 18:29:18.408273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.408304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 [2024-11-19 18:29:18.408528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.408561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 
00:30:17.021 [2024-11-19 18:29:18.408867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.408897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 [2024-11-19 18:29:18.409223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.409271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 [2024-11-19 18:29:18.409491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.409520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 [2024-11-19 18:29:18.409878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.409908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 [2024-11-19 18:29:18.410268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.410299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 
00:30:17.021 [2024-11-19 18:29:18.410657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.410687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 [2024-11-19 18:29:18.411014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.411044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 [2024-11-19 18:29:18.411388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.411420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 [2024-11-19 18:29:18.411759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.411788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 [2024-11-19 18:29:18.411960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.411988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 
00:30:17.021 [2024-11-19 18:29:18.412327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.412358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 [2024-11-19 18:29:18.412714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.412744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 [2024-11-19 18:29:18.412961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.412990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 [2024-11-19 18:29:18.413341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.413372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 [2024-11-19 18:29:18.413580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.413610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 
00:30:17.021 [2024-11-19 18:29:18.413964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:17.021 [2024-11-19 18:29:18.413994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5758000b90 with addr=10.0.0.2, port=4420 00:30:17.021 qpair failed and we were unable to recover it. 00:30:17.021 [2024-11-19 18:29:18.414147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.021 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.021 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:17.021 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.021 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.021 [2024-11-19 18:29:18.424785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.021 [2024-11-19 18:29:18.424930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.021 [2024-11-19 18:29:18.424974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.022 [2024-11-19 18:29:18.424997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.022 [2024-11-19 18:29:18.425017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.022 [2024-11-19 18:29:18.425069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.022 qpair failed and we were unable to recover it. 00:30:17.022 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.022 18:29:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2177761 00:30:17.022 [2024-11-19 18:29:18.434765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.022 [2024-11-19 18:29:18.434842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.022 [2024-11-19 18:29:18.434868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.022 [2024-11-19 18:29:18.434884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.022 [2024-11-19 18:29:18.434896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.022 [2024-11-19 18:29:18.434925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.022 qpair failed and we were unable to recover it. 
00:30:17.022 [2024-11-19 18:29:18.444743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.022 [2024-11-19 18:29:18.444829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.022 [2024-11-19 18:29:18.444849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.022 [2024-11-19 18:29:18.444861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.022 [2024-11-19 18:29:18.444870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.022 [2024-11-19 18:29:18.444891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.022 qpair failed and we were unable to recover it. 
00:30:17.022 [2024-11-19 18:29:18.454700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.022 [2024-11-19 18:29:18.454784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.022 [2024-11-19 18:29:18.454798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.022 [2024-11-19 18:29:18.454806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.022 [2024-11-19 18:29:18.454813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.022 [2024-11-19 18:29:18.454829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.022 qpair failed and we were unable to recover it. 
00:30:17.308 [2024-11-19 18:29:18.464727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.308 [2024-11-19 18:29:18.464782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.308 [2024-11-19 18:29:18.464795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.308 [2024-11-19 18:29:18.464803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.308 [2024-11-19 18:29:18.464810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.308 [2024-11-19 18:29:18.464825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.308 qpair failed and we were unable to recover it. 
00:30:17.308 [2024-11-19 18:29:18.474742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.308 [2024-11-19 18:29:18.474808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.308 [2024-11-19 18:29:18.474822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.308 [2024-11-19 18:29:18.474829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.308 [2024-11-19 18:29:18.474835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.308 [2024-11-19 18:29:18.474851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.308 qpair failed and we were unable to recover it. 
00:30:17.308 [2024-11-19 18:29:18.484771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.308 [2024-11-19 18:29:18.484817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.308 [2024-11-19 18:29:18.484830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.308 [2024-11-19 18:29:18.484838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.308 [2024-11-19 18:29:18.484844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.308 [2024-11-19 18:29:18.484859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.308 qpair failed and we were unable to recover it. 
00:30:17.308 [2024-11-19 18:29:18.494762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.308 [2024-11-19 18:29:18.494824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.308 [2024-11-19 18:29:18.494849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.308 [2024-11-19 18:29:18.494858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.308 [2024-11-19 18:29:18.494865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.308 [2024-11-19 18:29:18.494886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.308 qpair failed and we were unable to recover it. 
00:30:17.308 [2024-11-19 18:29:18.504854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.308 [2024-11-19 18:29:18.504944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.308 [2024-11-19 18:29:18.504970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.308 [2024-11-19 18:29:18.504980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.308 [2024-11-19 18:29:18.504987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.308 [2024-11-19 18:29:18.505007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.308 qpair failed and we were unable to recover it. 
00:30:17.308 [2024-11-19 18:29:18.514860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.308 [2024-11-19 18:29:18.514944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.308 [2024-11-19 18:29:18.514970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.308 [2024-11-19 18:29:18.514979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.308 [2024-11-19 18:29:18.514986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.308 [2024-11-19 18:29:18.515006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.308 qpair failed and we were unable to recover it. 
00:30:17.308 [2024-11-19 18:29:18.524892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.308 [2024-11-19 18:29:18.524952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.308 [2024-11-19 18:29:18.524967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.308 [2024-11-19 18:29:18.524974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.308 [2024-11-19 18:29:18.524981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.308 [2024-11-19 18:29:18.524996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.308 qpair failed and we were unable to recover it. 
00:30:17.309 [2024-11-19 18:29:18.534870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.309 [2024-11-19 18:29:18.534916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.309 [2024-11-19 18:29:18.534930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.309 [2024-11-19 18:29:18.534942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.309 [2024-11-19 18:29:18.534948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.309 [2024-11-19 18:29:18.534964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.309 qpair failed and we were unable to recover it. 
00:30:17.309 [2024-11-19 18:29:18.544937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.309 [2024-11-19 18:29:18.544993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.309 [2024-11-19 18:29:18.545007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.309 [2024-11-19 18:29:18.545014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.309 [2024-11-19 18:29:18.545021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.309 [2024-11-19 18:29:18.545035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.309 qpair failed and we were unable to recover it. 
00:30:17.309 [2024-11-19 18:29:18.554979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.309 [2024-11-19 18:29:18.555070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.309 [2024-11-19 18:29:18.555084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.309 [2024-11-19 18:29:18.555092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.309 [2024-11-19 18:29:18.555098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.309 [2024-11-19 18:29:18.555113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.309 qpair failed and we were unable to recover it. 
00:30:17.309 [2024-11-19 18:29:18.564975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.309 [2024-11-19 18:29:18.565031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.309 [2024-11-19 18:29:18.565044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.309 [2024-11-19 18:29:18.565051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.309 [2024-11-19 18:29:18.565058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.309 [2024-11-19 18:29:18.565073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.309 qpair failed and we were unable to recover it. 
00:30:17.309 [2024-11-19 18:29:18.574964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.309 [2024-11-19 18:29:18.575011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.309 [2024-11-19 18:29:18.575026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.309 [2024-11-19 18:29:18.575033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.309 [2024-11-19 18:29:18.575041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.309 [2024-11-19 18:29:18.575062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.309 qpair failed and we were unable to recover it. 
00:30:17.309 [2024-11-19 18:29:18.585033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.309 [2024-11-19 18:29:18.585083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.309 [2024-11-19 18:29:18.585097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.309 [2024-11-19 18:29:18.585105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.309 [2024-11-19 18:29:18.585112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.309 [2024-11-19 18:29:18.585127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.309 qpair failed and we were unable to recover it. 
00:30:17.309 [2024-11-19 18:29:18.595063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.309 [2024-11-19 18:29:18.595115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.309 [2024-11-19 18:29:18.595129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.309 [2024-11-19 18:29:18.595136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.309 [2024-11-19 18:29:18.595142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.309 [2024-11-19 18:29:18.595161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.309 qpair failed and we were unable to recover it. 
00:30:17.309 [2024-11-19 18:29:18.605101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.309 [2024-11-19 18:29:18.605155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.309 [2024-11-19 18:29:18.605173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.309 [2024-11-19 18:29:18.605181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.309 [2024-11-19 18:29:18.605187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.309 [2024-11-19 18:29:18.605202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.309 qpair failed and we were unable to recover it. 
00:30:17.309 [2024-11-19 18:29:18.615098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.309 [2024-11-19 18:29:18.615148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.309 [2024-11-19 18:29:18.615164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.309 [2024-11-19 18:29:18.615172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.309 [2024-11-19 18:29:18.615178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.309 [2024-11-19 18:29:18.615193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.309 qpair failed and we were unable to recover it. 
00:30:17.309 [2024-11-19 18:29:18.625267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.309 [2024-11-19 18:29:18.625332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.309 [2024-11-19 18:29:18.625345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.309 [2024-11-19 18:29:18.625352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.309 [2024-11-19 18:29:18.625359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.309 [2024-11-19 18:29:18.625374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.309 qpair failed and we were unable to recover it. 
00:30:17.309 [2024-11-19 18:29:18.635214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.309 [2024-11-19 18:29:18.635266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.309 [2024-11-19 18:29:18.635280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.309 [2024-11-19 18:29:18.635287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.309 [2024-11-19 18:29:18.635294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.309 [2024-11-19 18:29:18.635309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.309 qpair failed and we were unable to recover it. 
00:30:17.309 [2024-11-19 18:29:18.645172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.309 [2024-11-19 18:29:18.645237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.309 [2024-11-19 18:29:18.645250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.309 [2024-11-19 18:29:18.645258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.309 [2024-11-19 18:29:18.645264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.309 [2024-11-19 18:29:18.645279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.309 qpair failed and we were unable to recover it. 
00:30:17.309 [2024-11-19 18:29:18.655080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.309 [2024-11-19 18:29:18.655130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.309 [2024-11-19 18:29:18.655143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.309 [2024-11-19 18:29:18.655151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.309 [2024-11-19 18:29:18.655162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.309 [2024-11-19 18:29:18.655178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.309 qpair failed and we were unable to recover it. 
00:30:17.310 [2024-11-19 18:29:18.665256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.310 [2024-11-19 18:29:18.665309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.310 [2024-11-19 18:29:18.665322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.310 [2024-11-19 18:29:18.665333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.310 [2024-11-19 18:29:18.665340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.310 [2024-11-19 18:29:18.665355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.310 qpair failed and we were unable to recover it. 
00:30:17.310 [2024-11-19 18:29:18.675238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.310 [2024-11-19 18:29:18.675324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.310 [2024-11-19 18:29:18.675336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.310 [2024-11-19 18:29:18.675344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.310 [2024-11-19 18:29:18.675351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.310 [2024-11-19 18:29:18.675366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.310 qpair failed and we were unable to recover it. 
00:30:17.310 [2024-11-19 18:29:18.685357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.310 [2024-11-19 18:29:18.685422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.310 [2024-11-19 18:29:18.685436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.310 [2024-11-19 18:29:18.685443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.310 [2024-11-19 18:29:18.685449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.310 [2024-11-19 18:29:18.685464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.310 qpair failed and we were unable to recover it. 
00:30:17.310 [2024-11-19 18:29:18.695285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.310 [2024-11-19 18:29:18.695329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.310 [2024-11-19 18:29:18.695342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.310 [2024-11-19 18:29:18.695349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.310 [2024-11-19 18:29:18.695356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.310 [2024-11-19 18:29:18.695371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.310 qpair failed and we were unable to recover it. 
00:30:17.310 [2024-11-19 18:29:18.705399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.310 [2024-11-19 18:29:18.705465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.310 [2024-11-19 18:29:18.705478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.310 [2024-11-19 18:29:18.705486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.310 [2024-11-19 18:29:18.705493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.310 [2024-11-19 18:29:18.705511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.310 qpair failed and we were unable to recover it. 
00:30:17.310 [2024-11-19 18:29:18.715285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.310 [2024-11-19 18:29:18.715337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.310 [2024-11-19 18:29:18.715350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.310 [2024-11-19 18:29:18.715357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.310 [2024-11-19 18:29:18.715364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.310 [2024-11-19 18:29:18.715378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.310 qpair failed and we were unable to recover it. 
00:30:17.310 [2024-11-19 18:29:18.725404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.310 [2024-11-19 18:29:18.725487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.310 [2024-11-19 18:29:18.725500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.310 [2024-11-19 18:29:18.725508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.310 [2024-11-19 18:29:18.725515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.310 [2024-11-19 18:29:18.725530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.310 qpair failed and we were unable to recover it. 
00:30:17.310 [2024-11-19 18:29:18.735401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.310 [2024-11-19 18:29:18.735452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.310 [2024-11-19 18:29:18.735465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.310 [2024-11-19 18:29:18.735473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.310 [2024-11-19 18:29:18.735479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.310 [2024-11-19 18:29:18.735494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.310 qpair failed and we were unable to recover it. 
00:30:17.310 [2024-11-19 18:29:18.745465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.310 [2024-11-19 18:29:18.745513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.310 [2024-11-19 18:29:18.745526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.310 [2024-11-19 18:29:18.745534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.310 [2024-11-19 18:29:18.745540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.310 [2024-11-19 18:29:18.745555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.310 qpair failed and we were unable to recover it. 
00:30:17.310 [2024-11-19 18:29:18.755467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.310 [2024-11-19 18:29:18.755517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.310 [2024-11-19 18:29:18.755530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.310 [2024-11-19 18:29:18.755537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.310 [2024-11-19 18:29:18.755544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.310 [2024-11-19 18:29:18.755558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.310 qpair failed and we were unable to recover it. 
00:30:17.310 [2024-11-19 18:29:18.765517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.310 [2024-11-19 18:29:18.765568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.310 [2024-11-19 18:29:18.765581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.310 [2024-11-19 18:29:18.765588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.310 [2024-11-19 18:29:18.765595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.310 [2024-11-19 18:29:18.765609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.310 qpair failed and we were unable to recover it. 
00:30:17.573 [2024-11-19 18:29:18.775507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.573 [2024-11-19 18:29:18.775553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.573 [2024-11-19 18:29:18.775566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.573 [2024-11-19 18:29:18.775573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.573 [2024-11-19 18:29:18.775580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.573 [2024-11-19 18:29:18.775595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.573 qpair failed and we were unable to recover it. 
00:30:17.573 [2024-11-19 18:29:18.785565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.573 [2024-11-19 18:29:18.785645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.573 [2024-11-19 18:29:18.785658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.573 [2024-11-19 18:29:18.785665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.573 [2024-11-19 18:29:18.785673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.573 [2024-11-19 18:29:18.785687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.573 qpair failed and we were unable to recover it. 
00:30:17.573 [2024-11-19 18:29:18.795556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.573 [2024-11-19 18:29:18.795606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.573 [2024-11-19 18:29:18.795622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.574 [2024-11-19 18:29:18.795630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.574 [2024-11-19 18:29:18.795636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.574 [2024-11-19 18:29:18.795651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.574 qpair failed and we were unable to recover it. 
00:30:17.574 [2024-11-19 18:29:18.805609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.574 [2024-11-19 18:29:18.805663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.574 [2024-11-19 18:29:18.805676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.574 [2024-11-19 18:29:18.805683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.574 [2024-11-19 18:29:18.805690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.574 [2024-11-19 18:29:18.805704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.574 qpair failed and we were unable to recover it. 
00:30:17.574 [2024-11-19 18:29:18.815477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.574 [2024-11-19 18:29:18.815525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.574 [2024-11-19 18:29:18.815538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.574 [2024-11-19 18:29:18.815545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.574 [2024-11-19 18:29:18.815552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.574 [2024-11-19 18:29:18.815566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.574 qpair failed and we were unable to recover it. 
00:30:17.574 [2024-11-19 18:29:18.825729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.574 [2024-11-19 18:29:18.825805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.574 [2024-11-19 18:29:18.825818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.574 [2024-11-19 18:29:18.825825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.574 [2024-11-19 18:29:18.825832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.574 [2024-11-19 18:29:18.825847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.574 qpair failed and we were unable to recover it. 
00:30:17.574 [2024-11-19 18:29:18.835664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.574 [2024-11-19 18:29:18.835716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.574 [2024-11-19 18:29:18.835729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.574 [2024-11-19 18:29:18.835737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.574 [2024-11-19 18:29:18.835750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.574 [2024-11-19 18:29:18.835765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.574 qpair failed and we were unable to recover it. 
00:30:17.574 [2024-11-19 18:29:18.845701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.574 [2024-11-19 18:29:18.845755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.574 [2024-11-19 18:29:18.845768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.574 [2024-11-19 18:29:18.845776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.574 [2024-11-19 18:29:18.845782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.574 [2024-11-19 18:29:18.845796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.574 qpair failed and we were unable to recover it. 
00:30:17.574 [2024-11-19 18:29:18.855679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.574 [2024-11-19 18:29:18.855731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.574 [2024-11-19 18:29:18.855743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.574 [2024-11-19 18:29:18.855751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.574 [2024-11-19 18:29:18.855757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.574 [2024-11-19 18:29:18.855772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.574 qpair failed and we were unable to recover it. 
00:30:17.574 [2024-11-19 18:29:18.865794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.574 [2024-11-19 18:29:18.865897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.574 [2024-11-19 18:29:18.865911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.574 [2024-11-19 18:29:18.865919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.574 [2024-11-19 18:29:18.865926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.574 [2024-11-19 18:29:18.865940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.574 qpair failed and we were unable to recover it. 
00:30:17.574 [2024-11-19 18:29:18.875774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.574 [2024-11-19 18:29:18.875826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.574 [2024-11-19 18:29:18.875851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.574 [2024-11-19 18:29:18.875860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.574 [2024-11-19 18:29:18.875868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.574 [2024-11-19 18:29:18.875889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.574 qpair failed and we were unable to recover it. 
00:30:17.574 [2024-11-19 18:29:18.885830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.574 [2024-11-19 18:29:18.885884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.574 [2024-11-19 18:29:18.885909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.574 [2024-11-19 18:29:18.885918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.574 [2024-11-19 18:29:18.885925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.574 [2024-11-19 18:29:18.885946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.574 qpair failed and we were unable to recover it. 
00:30:17.574 [2024-11-19 18:29:18.895701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.574 [2024-11-19 18:29:18.895752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.574 [2024-11-19 18:29:18.895767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.575 [2024-11-19 18:29:18.895774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.575 [2024-11-19 18:29:18.895781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.575 [2024-11-19 18:29:18.895797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.575 qpair failed and we were unable to recover it. 
00:30:17.575 [2024-11-19 18:29:18.905918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.575 [2024-11-19 18:29:18.905977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.575 [2024-11-19 18:29:18.905991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.575 [2024-11-19 18:29:18.905998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.575 [2024-11-19 18:29:18.906005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.575 [2024-11-19 18:29:18.906020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.575 qpair failed and we were unable to recover it. 
00:30:17.575 [2024-11-19 18:29:18.915914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.575 [2024-11-19 18:29:18.915958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.575 [2024-11-19 18:29:18.915972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.575 [2024-11-19 18:29:18.915979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.575 [2024-11-19 18:29:18.915986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.575 [2024-11-19 18:29:18.916001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.575 qpair failed and we were unable to recover it. 
00:30:17.575 [2024-11-19 18:29:18.925948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.575 [2024-11-19 18:29:18.926000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.575 [2024-11-19 18:29:18.926017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.575 [2024-11-19 18:29:18.926025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.575 [2024-11-19 18:29:18.926032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.575 [2024-11-19 18:29:18.926046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.575 qpair failed and we were unable to recover it. 
00:30:17.575 [2024-11-19 18:29:18.935947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.575 [2024-11-19 18:29:18.935996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.575 [2024-11-19 18:29:18.936009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.575 [2024-11-19 18:29:18.936017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.575 [2024-11-19 18:29:18.936023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.575 [2024-11-19 18:29:18.936038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.575 qpair failed and we were unable to recover it. 
00:30:17.575 [2024-11-19 18:29:18.945999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.575 [2024-11-19 18:29:18.946051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.575 [2024-11-19 18:29:18.946064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.575 [2024-11-19 18:29:18.946071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.575 [2024-11-19 18:29:18.946078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.575 [2024-11-19 18:29:18.946092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.575 qpair failed and we were unable to recover it. 
00:30:17.575 [2024-11-19 18:29:18.956001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.575 [2024-11-19 18:29:18.956053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.575 [2024-11-19 18:29:18.956066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.575 [2024-11-19 18:29:18.956073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.575 [2024-11-19 18:29:18.956080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.575 [2024-11-19 18:29:18.956094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.575 qpair failed and we were unable to recover it. 
00:30:17.575 [2024-11-19 18:29:18.966081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.575 [2024-11-19 18:29:18.966164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.575 [2024-11-19 18:29:18.966178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.575 [2024-11-19 18:29:18.966186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.575 [2024-11-19 18:29:18.966196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.575 [2024-11-19 18:29:18.966211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.575 qpair failed and we were unable to recover it. 
00:30:17.575 [2024-11-19 18:29:18.976017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.575 [2024-11-19 18:29:18.976069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.575 [2024-11-19 18:29:18.976082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.575 [2024-11-19 18:29:18.976090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.575 [2024-11-19 18:29:18.976096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.575 [2024-11-19 18:29:18.976111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.575 qpair failed and we were unable to recover it. 
00:30:17.575 [2024-11-19 18:29:18.986120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.575 [2024-11-19 18:29:18.986171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.575 [2024-11-19 18:29:18.986184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.575 [2024-11-19 18:29:18.986192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.575 [2024-11-19 18:29:18.986198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.575 [2024-11-19 18:29:18.986213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.575 qpair failed and we were unable to recover it. 
00:30:17.575 [2024-11-19 18:29:18.996013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.575 [2024-11-19 18:29:18.996066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.575 [2024-11-19 18:29:18.996078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.575 [2024-11-19 18:29:18.996086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.575 [2024-11-19 18:29:18.996092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.575 [2024-11-19 18:29:18.996107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.575 qpair failed and we were unable to recover it. 
00:30:17.575 [2024-11-19 18:29:19.006235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.575 [2024-11-19 18:29:19.006303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.576 [2024-11-19 18:29:19.006316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.576 [2024-11-19 18:29:19.006324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.576 [2024-11-19 18:29:19.006330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.576 [2024-11-19 18:29:19.006345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.576 qpair failed and we were unable to recover it. 
00:30:17.576 [2024-11-19 18:29:19.016169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.576 [2024-11-19 18:29:19.016223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.576 [2024-11-19 18:29:19.016236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.576 [2024-11-19 18:29:19.016244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.576 [2024-11-19 18:29:19.016250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.576 [2024-11-19 18:29:19.016265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.576 qpair failed and we were unable to recover it. 
00:30:17.576 [2024-11-19 18:29:19.026236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.576 [2024-11-19 18:29:19.026286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.576 [2024-11-19 18:29:19.026300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.576 [2024-11-19 18:29:19.026307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.576 [2024-11-19 18:29:19.026314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.576 [2024-11-19 18:29:19.026328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.576 qpair failed and we were unable to recover it. 
00:30:17.576 [2024-11-19 18:29:19.036249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.576 [2024-11-19 18:29:19.036307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.576 [2024-11-19 18:29:19.036321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.576 [2024-11-19 18:29:19.036328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.576 [2024-11-19 18:29:19.036334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.576 [2024-11-19 18:29:19.036349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.576 qpair failed and we were unable to recover it. 
00:30:17.839 [2024-11-19 18:29:19.046273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.839 [2024-11-19 18:29:19.046357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.839 [2024-11-19 18:29:19.046370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.839 [2024-11-19 18:29:19.046378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.839 [2024-11-19 18:29:19.046384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.839 [2024-11-19 18:29:19.046399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.839 qpair failed and we were unable to recover it. 
00:30:17.839 [2024-11-19 18:29:19.056277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.839 [2024-11-19 18:29:19.056330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.839 [2024-11-19 18:29:19.056343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.839 [2024-11-19 18:29:19.056351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.839 [2024-11-19 18:29:19.056358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.839 [2024-11-19 18:29:19.056372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.839 qpair failed and we were unable to recover it. 
00:30:17.839 [2024-11-19 18:29:19.066328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.839 [2024-11-19 18:29:19.066379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.839 [2024-11-19 18:29:19.066392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.839 [2024-11-19 18:29:19.066399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.839 [2024-11-19 18:29:19.066405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.839 [2024-11-19 18:29:19.066420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.839 qpair failed and we were unable to recover it. 
00:30:17.839 [2024-11-19 18:29:19.076320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.839 [2024-11-19 18:29:19.076367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.839 [2024-11-19 18:29:19.076381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.839 [2024-11-19 18:29:19.076389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.839 [2024-11-19 18:29:19.076396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.839 [2024-11-19 18:29:19.076410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.839 qpair failed and we were unable to recover it. 
00:30:17.839 [2024-11-19 18:29:19.086387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.839 [2024-11-19 18:29:19.086439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.839 [2024-11-19 18:29:19.086452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.839 [2024-11-19 18:29:19.086460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.839 [2024-11-19 18:29:19.086466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.839 [2024-11-19 18:29:19.086481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.839 qpair failed and we were unable to recover it. 
00:30:17.839 [2024-11-19 18:29:19.096380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.839 [2024-11-19 18:29:19.096427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.839 [2024-11-19 18:29:19.096441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.839 [2024-11-19 18:29:19.096453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.839 [2024-11-19 18:29:19.096461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.839 [2024-11-19 18:29:19.096476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.839 qpair failed and we were unable to recover it. 
00:30:17.839 [2024-11-19 18:29:19.106449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.839 [2024-11-19 18:29:19.106502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.839 [2024-11-19 18:29:19.106515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.839 [2024-11-19 18:29:19.106522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.839 [2024-11-19 18:29:19.106529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.839 [2024-11-19 18:29:19.106544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.839 qpair failed and we were unable to recover it. 
00:30:17.839 [2024-11-19 18:29:19.116431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.839 [2024-11-19 18:29:19.116492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.839 [2024-11-19 18:29:19.116505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.839 [2024-11-19 18:29:19.116512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.839 [2024-11-19 18:29:19.116519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.839 [2024-11-19 18:29:19.116533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.839 qpair failed and we were unable to recover it. 
00:30:17.839 [2024-11-19 18:29:19.126478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.839 [2024-11-19 18:29:19.126568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.839 [2024-11-19 18:29:19.126581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.839 [2024-11-19 18:29:19.126589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.839 [2024-11-19 18:29:19.126596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.839 [2024-11-19 18:29:19.126611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.839 qpair failed and we were unable to recover it. 
00:30:17.839 [2024-11-19 18:29:19.136485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.839 [2024-11-19 18:29:19.136533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.839 [2024-11-19 18:29:19.136547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.839 [2024-11-19 18:29:19.136555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.839 [2024-11-19 18:29:19.136562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.839 [2024-11-19 18:29:19.136581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.839 qpair failed and we were unable to recover it. 
00:30:17.839 [2024-11-19 18:29:19.146536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.839 [2024-11-19 18:29:19.146590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.839 [2024-11-19 18:29:19.146603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.839 [2024-11-19 18:29:19.146611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.839 [2024-11-19 18:29:19.146618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.839 [2024-11-19 18:29:19.146633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.840 qpair failed and we were unable to recover it. 
00:30:17.840 [2024-11-19 18:29:19.156551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.840 [2024-11-19 18:29:19.156601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.840 [2024-11-19 18:29:19.156614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.840 [2024-11-19 18:29:19.156621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.840 [2024-11-19 18:29:19.156628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.840 [2024-11-19 18:29:19.156642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.840 qpair failed and we were unable to recover it. 
00:30:17.840 [2024-11-19 18:29:19.166636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.840 [2024-11-19 18:29:19.166692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.840 [2024-11-19 18:29:19.166705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.840 [2024-11-19 18:29:19.166713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.840 [2024-11-19 18:29:19.166720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.840 [2024-11-19 18:29:19.166735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.840 qpair failed and we were unable to recover it. 
00:30:17.840 [2024-11-19 18:29:19.176572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.840 [2024-11-19 18:29:19.176619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.840 [2024-11-19 18:29:19.176632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.840 [2024-11-19 18:29:19.176639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.840 [2024-11-19 18:29:19.176646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.840 [2024-11-19 18:29:19.176660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.840 qpair failed and we were unable to recover it. 
00:30:17.840 [2024-11-19 18:29:19.186661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.840 [2024-11-19 18:29:19.186733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.840 [2024-11-19 18:29:19.186746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.840 [2024-11-19 18:29:19.186754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.840 [2024-11-19 18:29:19.186760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.840 [2024-11-19 18:29:19.186775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.840 qpair failed and we were unable to recover it. 
00:30:17.840 [2024-11-19 18:29:19.196647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.840 [2024-11-19 18:29:19.196699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.840 [2024-11-19 18:29:19.196712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.840 [2024-11-19 18:29:19.196719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.840 [2024-11-19 18:29:19.196726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.840 [2024-11-19 18:29:19.196741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.840 qpair failed and we were unable to recover it. 
00:30:17.840 [2024-11-19 18:29:19.206672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.840 [2024-11-19 18:29:19.206718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.840 [2024-11-19 18:29:19.206731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.840 [2024-11-19 18:29:19.206739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.840 [2024-11-19 18:29:19.206745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.840 [2024-11-19 18:29:19.206760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.840 qpair failed and we were unable to recover it. 
00:30:17.840 [2024-11-19 18:29:19.216686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.840 [2024-11-19 18:29:19.216734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.840 [2024-11-19 18:29:19.216747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.840 [2024-11-19 18:29:19.216755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.840 [2024-11-19 18:29:19.216761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.840 [2024-11-19 18:29:19.216776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.840 qpair failed and we were unable to recover it. 
00:30:17.840 [2024-11-19 18:29:19.226763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.840 [2024-11-19 18:29:19.226824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.840 [2024-11-19 18:29:19.226841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.840 [2024-11-19 18:29:19.226848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.840 [2024-11-19 18:29:19.226854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.840 [2024-11-19 18:29:19.226869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.840 qpair failed and we were unable to recover it. 
00:30:17.840 [2024-11-19 18:29:19.236774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.840 [2024-11-19 18:29:19.236827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.840 [2024-11-19 18:29:19.236864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.840 [2024-11-19 18:29:19.236872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.840 [2024-11-19 18:29:19.236879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.840 [2024-11-19 18:29:19.236903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.840 qpair failed and we were unable to recover it. 
00:30:17.840 [2024-11-19 18:29:19.246677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.840 [2024-11-19 18:29:19.246728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.840 [2024-11-19 18:29:19.246742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.840 [2024-11-19 18:29:19.246750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.840 [2024-11-19 18:29:19.246757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.840 [2024-11-19 18:29:19.246772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.840 qpair failed and we were unable to recover it. 
00:30:17.840 [2024-11-19 18:29:19.256761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.840 [2024-11-19 18:29:19.256807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.840 [2024-11-19 18:29:19.256821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.840 [2024-11-19 18:29:19.256828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.840 [2024-11-19 18:29:19.256835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.840 [2024-11-19 18:29:19.256850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.840 qpair failed and we were unable to recover it. 
00:30:17.840 [2024-11-19 18:29:19.266880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.840 [2024-11-19 18:29:19.266937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.840 [2024-11-19 18:29:19.266950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.840 [2024-11-19 18:29:19.266958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.840 [2024-11-19 18:29:19.266965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.840 [2024-11-19 18:29:19.266985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.840 qpair failed and we were unable to recover it. 
00:30:17.840 [2024-11-19 18:29:19.276880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.840 [2024-11-19 18:29:19.276932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.840 [2024-11-19 18:29:19.276945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.840 [2024-11-19 18:29:19.276953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.840 [2024-11-19 18:29:19.276959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.840 [2024-11-19 18:29:19.276974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.841 qpair failed and we were unable to recover it. 
00:30:17.841 [2024-11-19 18:29:19.286894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.841 [2024-11-19 18:29:19.286947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.841 [2024-11-19 18:29:19.286961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.841 [2024-11-19 18:29:19.286968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.841 [2024-11-19 18:29:19.286975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.841 [2024-11-19 18:29:19.286990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.841 qpair failed and we were unable to recover it. 
00:30:17.841 [2024-11-19 18:29:19.296905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.841 [2024-11-19 18:29:19.296955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.841 [2024-11-19 18:29:19.296968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.841 [2024-11-19 18:29:19.296976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.841 [2024-11-19 18:29:19.296983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:17.841 [2024-11-19 18:29:19.296999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:17.841 qpair failed and we were unable to recover it. 
00:30:18.103 [2024-11-19 18:29:19.306977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.104 [2024-11-19 18:29:19.307029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.104 [2024-11-19 18:29:19.307043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.104 [2024-11-19 18:29:19.307050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.104 [2024-11-19 18:29:19.307057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.104 [2024-11-19 18:29:19.307071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.104 qpair failed and we were unable to recover it. 
00:30:18.104 [2024-11-19 18:29:19.317017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.104 [2024-11-19 18:29:19.317066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.104 [2024-11-19 18:29:19.317080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.104 [2024-11-19 18:29:19.317087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.104 [2024-11-19 18:29:19.317094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.104 [2024-11-19 18:29:19.317109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.104 qpair failed and we were unable to recover it. 
00:30:18.104 [2024-11-19 18:29:19.327038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.104 [2024-11-19 18:29:19.327086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.104 [2024-11-19 18:29:19.327100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.104 [2024-11-19 18:29:19.327107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.104 [2024-11-19 18:29:19.327114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.104 [2024-11-19 18:29:19.327129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.104 qpair failed and we were unable to recover it. 
00:30:18.104 [2024-11-19 18:29:19.337023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.104 [2024-11-19 18:29:19.337070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.104 [2024-11-19 18:29:19.337084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.104 [2024-11-19 18:29:19.337091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.104 [2024-11-19 18:29:19.337097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.104 [2024-11-19 18:29:19.337113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.104 qpair failed and we were unable to recover it. 
00:30:18.104 [2024-11-19 18:29:19.347095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.104 [2024-11-19 18:29:19.347149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.104 [2024-11-19 18:29:19.347167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.104 [2024-11-19 18:29:19.347174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.104 [2024-11-19 18:29:19.347181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.104 [2024-11-19 18:29:19.347196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.104 qpair failed and we were unable to recover it. 
00:30:18.104 [2024-11-19 18:29:19.357089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.104 [2024-11-19 18:29:19.357144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.104 [2024-11-19 18:29:19.357164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.104 [2024-11-19 18:29:19.357172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.104 [2024-11-19 18:29:19.357178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.104 [2024-11-19 18:29:19.357193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.104 qpair failed and we were unable to recover it. 
00:30:18.104 [2024-11-19 18:29:19.367145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.104 [2024-11-19 18:29:19.367204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.104 [2024-11-19 18:29:19.367217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.104 [2024-11-19 18:29:19.367224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.104 [2024-11-19 18:29:19.367230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.104 [2024-11-19 18:29:19.367245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.104 qpair failed and we were unable to recover it. 
00:30:18.104 [2024-11-19 18:29:19.377133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.104 [2024-11-19 18:29:19.377179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.104 [2024-11-19 18:29:19.377192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.104 [2024-11-19 18:29:19.377199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.104 [2024-11-19 18:29:19.377206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.104 [2024-11-19 18:29:19.377221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.104 qpair failed and we were unable to recover it. 
00:30:18.104 [2024-11-19 18:29:19.387182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.104 [2024-11-19 18:29:19.387231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.104 [2024-11-19 18:29:19.387243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.104 [2024-11-19 18:29:19.387251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.104 [2024-11-19 18:29:19.387257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.104 [2024-11-19 18:29:19.387272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.104 qpair failed and we were unable to recover it. 
00:30:18.104 [2024-11-19 18:29:19.397216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.104 [2024-11-19 18:29:19.397311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.104 [2024-11-19 18:29:19.397325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.104 [2024-11-19 18:29:19.397332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.104 [2024-11-19 18:29:19.397343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.104 [2024-11-19 18:29:19.397357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.104 qpair failed and we were unable to recover it. 
00:30:18.104 [2024-11-19 18:29:19.407235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.104 [2024-11-19 18:29:19.407321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.104 [2024-11-19 18:29:19.407334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.104 [2024-11-19 18:29:19.407341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.104 [2024-11-19 18:29:19.407348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.104 [2024-11-19 18:29:19.407363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.104 qpair failed and we were unable to recover it. 
00:30:18.104 [2024-11-19 18:29:19.417240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.104 [2024-11-19 18:29:19.417290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.104 [2024-11-19 18:29:19.417303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.104 [2024-11-19 18:29:19.417310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.104 [2024-11-19 18:29:19.417317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.104 [2024-11-19 18:29:19.417332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.104 qpair failed and we were unable to recover it. 
00:30:18.104 [2024-11-19 18:29:19.427306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.104 [2024-11-19 18:29:19.427361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.104 [2024-11-19 18:29:19.427374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.104 [2024-11-19 18:29:19.427381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.104 [2024-11-19 18:29:19.427388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.104 [2024-11-19 18:29:19.427403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.104 qpair failed and we were unable to recover it. 
00:30:18.105 [2024-11-19 18:29:19.437316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.105 [2024-11-19 18:29:19.437362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.105 [2024-11-19 18:29:19.437375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.105 [2024-11-19 18:29:19.437382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.105 [2024-11-19 18:29:19.437389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.105 [2024-11-19 18:29:19.437404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.105 qpair failed and we were unable to recover it. 
00:30:18.105 [2024-11-19 18:29:19.447348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.105 [2024-11-19 18:29:19.447399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.105 [2024-11-19 18:29:19.447412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.105 [2024-11-19 18:29:19.447420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.105 [2024-11-19 18:29:19.447427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.105 [2024-11-19 18:29:19.447441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.105 qpair failed and we were unable to recover it. 
00:30:18.105 [2024-11-19 18:29:19.457344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.105 [2024-11-19 18:29:19.457391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.105 [2024-11-19 18:29:19.457405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.105 [2024-11-19 18:29:19.457412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.105 [2024-11-19 18:29:19.457419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.105 [2024-11-19 18:29:19.457434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.105 qpair failed and we were unable to recover it. 
00:30:18.105 [2024-11-19 18:29:19.467410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.105 [2024-11-19 18:29:19.467458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.105 [2024-11-19 18:29:19.467471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.105 [2024-11-19 18:29:19.467478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.105 [2024-11-19 18:29:19.467485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.105 [2024-11-19 18:29:19.467500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.105 qpair failed and we were unable to recover it. 
00:30:18.105 [2024-11-19 18:29:19.477315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.105 [2024-11-19 18:29:19.477363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.105 [2024-11-19 18:29:19.477377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.105 [2024-11-19 18:29:19.477384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.105 [2024-11-19 18:29:19.477391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.105 [2024-11-19 18:29:19.477406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.105 qpair failed and we were unable to recover it. 
00:30:18.105 [2024-11-19 18:29:19.487460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.105 [2024-11-19 18:29:19.487505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.105 [2024-11-19 18:29:19.487521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.105 [2024-11-19 18:29:19.487529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.105 [2024-11-19 18:29:19.487536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.105 [2024-11-19 18:29:19.487552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.105 qpair failed and we were unable to recover it. 
00:30:18.105 [2024-11-19 18:29:19.497483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.105 [2024-11-19 18:29:19.497533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.105 [2024-11-19 18:29:19.497545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.105 [2024-11-19 18:29:19.497553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.105 [2024-11-19 18:29:19.497560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.105 [2024-11-19 18:29:19.497575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.105 qpair failed and we were unable to recover it. 
00:30:18.105 [2024-11-19 18:29:19.507508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.105 [2024-11-19 18:29:19.507560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.105 [2024-11-19 18:29:19.507572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.105 [2024-11-19 18:29:19.507580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.105 [2024-11-19 18:29:19.507586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.105 [2024-11-19 18:29:19.507601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.105 qpair failed and we were unable to recover it. 
00:30:18.105 [2024-11-19 18:29:19.517522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.105 [2024-11-19 18:29:19.517575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.105 [2024-11-19 18:29:19.517590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.105 [2024-11-19 18:29:19.517597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.105 [2024-11-19 18:29:19.517604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.105 [2024-11-19 18:29:19.517623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.105 qpair failed and we were unable to recover it. 
00:30:18.105 [2024-11-19 18:29:19.527545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.105 [2024-11-19 18:29:19.527591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.105 [2024-11-19 18:29:19.527605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.105 [2024-11-19 18:29:19.527616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.105 [2024-11-19 18:29:19.527623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.105 [2024-11-19 18:29:19.527638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.105 qpair failed and we were unable to recover it. 
00:30:18.105 [2024-11-19 18:29:19.537545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.105 [2024-11-19 18:29:19.537595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.105 [2024-11-19 18:29:19.537608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.105 [2024-11-19 18:29:19.537616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.105 [2024-11-19 18:29:19.537623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.105 [2024-11-19 18:29:19.537637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.105 qpair failed and we were unable to recover it. 
00:30:18.105 [2024-11-19 18:29:19.547612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.105 [2024-11-19 18:29:19.547665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.105 [2024-11-19 18:29:19.547679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.105 [2024-11-19 18:29:19.547686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.105 [2024-11-19 18:29:19.547693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.105 [2024-11-19 18:29:19.547708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.105 qpair failed and we were unable to recover it. 
00:30:18.105 [2024-11-19 18:29:19.557641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.105 [2024-11-19 18:29:19.557689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.105 [2024-11-19 18:29:19.557702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.105 [2024-11-19 18:29:19.557710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.105 [2024-11-19 18:29:19.557716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.105 [2024-11-19 18:29:19.557731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.105 qpair failed and we were unable to recover it. 
00:30:18.106 [2024-11-19 18:29:19.567668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.106 [2024-11-19 18:29:19.567715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.106 [2024-11-19 18:29:19.567729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.106 [2024-11-19 18:29:19.567736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.106 [2024-11-19 18:29:19.567742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.106 [2024-11-19 18:29:19.567758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.106 qpair failed and we were unable to recover it. 
00:30:18.368 [2024-11-19 18:29:19.577666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.368 [2024-11-19 18:29:19.577711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.368 [2024-11-19 18:29:19.577725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.368 [2024-11-19 18:29:19.577732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.368 [2024-11-19 18:29:19.577739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.368 [2024-11-19 18:29:19.577754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.368 qpair failed and we were unable to recover it. 
00:30:18.368 [2024-11-19 18:29:19.587724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.368 [2024-11-19 18:29:19.587771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.368 [2024-11-19 18:29:19.587784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.368 [2024-11-19 18:29:19.587792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.368 [2024-11-19 18:29:19.587798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.368 [2024-11-19 18:29:19.587813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.368 qpair failed and we were unable to recover it. 
00:30:18.368 [2024-11-19 18:29:19.597746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.368 [2024-11-19 18:29:19.597794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.368 [2024-11-19 18:29:19.597808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.368 [2024-11-19 18:29:19.597815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.368 [2024-11-19 18:29:19.597822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.368 [2024-11-19 18:29:19.597837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.368 qpair failed and we were unable to recover it. 
00:30:18.368 [2024-11-19 18:29:19.607779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.368 [2024-11-19 18:29:19.607830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.368 [2024-11-19 18:29:19.607843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.368 [2024-11-19 18:29:19.607851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.368 [2024-11-19 18:29:19.607857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.368 [2024-11-19 18:29:19.607872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.368 qpair failed and we were unable to recover it. 
00:30:18.368 [2024-11-19 18:29:19.617761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.368 [2024-11-19 18:29:19.617812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.368 [2024-11-19 18:29:19.617825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.368 [2024-11-19 18:29:19.617832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.368 [2024-11-19 18:29:19.617839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.368 [2024-11-19 18:29:19.617854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.368 qpair failed and we were unable to recover it. 
00:30:18.368 [2024-11-19 18:29:19.627839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.368 [2024-11-19 18:29:19.627894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.368 [2024-11-19 18:29:19.627907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.368 [2024-11-19 18:29:19.627915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.368 [2024-11-19 18:29:19.627921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.368 [2024-11-19 18:29:19.627936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.368 qpair failed and we were unable to recover it. 
00:30:18.368 [2024-11-19 18:29:19.637870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.368 [2024-11-19 18:29:19.637917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.368 [2024-11-19 18:29:19.637930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.368 [2024-11-19 18:29:19.637937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.368 [2024-11-19 18:29:19.637944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.368 [2024-11-19 18:29:19.637959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.368 qpair failed and we were unable to recover it. 
00:30:18.368 [2024-11-19 18:29:19.647860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.368 [2024-11-19 18:29:19.647911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.368 [2024-11-19 18:29:19.647936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.369 [2024-11-19 18:29:19.647945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.369 [2024-11-19 18:29:19.647952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.369 [2024-11-19 18:29:19.647973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.369 qpair failed and we were unable to recover it. 
00:30:18.369 [2024-11-19 18:29:19.657773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.369 [2024-11-19 18:29:19.657823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.369 [2024-11-19 18:29:19.657839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.369 [2024-11-19 18:29:19.657852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.369 [2024-11-19 18:29:19.657859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.369 [2024-11-19 18:29:19.657876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.369 qpair failed and we were unable to recover it. 
00:30:18.369 [2024-11-19 18:29:19.667952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.369 [2024-11-19 18:29:19.668005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.369 [2024-11-19 18:29:19.668019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.369 [2024-11-19 18:29:19.668026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.369 [2024-11-19 18:29:19.668033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.369 [2024-11-19 18:29:19.668048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.369 qpair failed and we were unable to recover it. 
00:30:18.369 [2024-11-19 18:29:19.677972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.369 [2024-11-19 18:29:19.678025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.369 [2024-11-19 18:29:19.678039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.369 [2024-11-19 18:29:19.678046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.369 [2024-11-19 18:29:19.678053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.369 [2024-11-19 18:29:19.678068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.369 qpair failed and we were unable to recover it. 
00:30:18.369 [2024-11-19 18:29:19.688002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.369 [2024-11-19 18:29:19.688050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.369 [2024-11-19 18:29:19.688063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.369 [2024-11-19 18:29:19.688071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.369 [2024-11-19 18:29:19.688078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.369 [2024-11-19 18:29:19.688092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.369 qpair failed and we were unable to recover it. 
00:30:18.369 [2024-11-19 18:29:19.697990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.369 [2024-11-19 18:29:19.698041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.369 [2024-11-19 18:29:19.698054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.369 [2024-11-19 18:29:19.698062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.369 [2024-11-19 18:29:19.698068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.369 [2024-11-19 18:29:19.698086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.369 qpair failed and we were unable to recover it. 
00:30:18.369 [2024-11-19 18:29:19.708053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.369 [2024-11-19 18:29:19.708112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.369 [2024-11-19 18:29:19.708126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.369 [2024-11-19 18:29:19.708133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.369 [2024-11-19 18:29:19.708140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.369 [2024-11-19 18:29:19.708155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.369 qpair failed and we were unable to recover it. 
00:30:18.369 [2024-11-19 18:29:19.717950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.369 [2024-11-19 18:29:19.718024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.369 [2024-11-19 18:29:19.718038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.369 [2024-11-19 18:29:19.718046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.369 [2024-11-19 18:29:19.718053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.369 [2024-11-19 18:29:19.718069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.369 qpair failed and we were unable to recover it. 
00:30:18.369 [2024-11-19 18:29:19.728103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.369 [2024-11-19 18:29:19.728153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.369 [2024-11-19 18:29:19.728172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.369 [2024-11-19 18:29:19.728179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.369 [2024-11-19 18:29:19.728186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.369 [2024-11-19 18:29:19.728201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.369 qpair failed and we were unable to recover it. 
00:30:18.369 [2024-11-19 18:29:19.737964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.369 [2024-11-19 18:29:19.738012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.369 [2024-11-19 18:29:19.738025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.369 [2024-11-19 18:29:19.738033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.369 [2024-11-19 18:29:19.738039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.369 [2024-11-19 18:29:19.738054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.369 qpair failed and we were unable to recover it. 
00:30:18.369 [2024-11-19 18:29:19.748178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.369 [2024-11-19 18:29:19.748242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.369 [2024-11-19 18:29:19.748256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.369 [2024-11-19 18:29:19.748263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.369 [2024-11-19 18:29:19.748270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.369 [2024-11-19 18:29:19.748285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.369 qpair failed and we were unable to recover it. 
00:30:18.369 [2024-11-19 18:29:19.758192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.369 [2024-11-19 18:29:19.758242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.369 [2024-11-19 18:29:19.758255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.369 [2024-11-19 18:29:19.758263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.369 [2024-11-19 18:29:19.758269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.369 [2024-11-19 18:29:19.758284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.369 qpair failed and we were unable to recover it. 
00:30:18.369 [2024-11-19 18:29:19.768201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.369 [2024-11-19 18:29:19.768273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.369 [2024-11-19 18:29:19.768286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.369 [2024-11-19 18:29:19.768294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.369 [2024-11-19 18:29:19.768300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.369 [2024-11-19 18:29:19.768315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.369 qpair failed and we were unable to recover it. 
00:30:18.369 [2024-11-19 18:29:19.778126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.369 [2024-11-19 18:29:19.778179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.370 [2024-11-19 18:29:19.778192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.370 [2024-11-19 18:29:19.778200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.370 [2024-11-19 18:29:19.778206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.370 [2024-11-19 18:29:19.778221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.370 qpair failed and we were unable to recover it. 
00:30:18.370 [2024-11-19 18:29:19.788268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.370 [2024-11-19 18:29:19.788329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.370 [2024-11-19 18:29:19.788345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.370 [2024-11-19 18:29:19.788353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.370 [2024-11-19 18:29:19.788359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.370 [2024-11-19 18:29:19.788374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.370 qpair failed and we were unable to recover it. 
00:30:18.370 [2024-11-19 18:29:19.798298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.370 [2024-11-19 18:29:19.798349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.370 [2024-11-19 18:29:19.798362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.370 [2024-11-19 18:29:19.798369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.370 [2024-11-19 18:29:19.798376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.370 [2024-11-19 18:29:19.798391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.370 qpair failed and we were unable to recover it. 
00:30:18.370 [2024-11-19 18:29:19.808338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.370 [2024-11-19 18:29:19.808398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.370 [2024-11-19 18:29:19.808411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.370 [2024-11-19 18:29:19.808418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.370 [2024-11-19 18:29:19.808425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.370 [2024-11-19 18:29:19.808439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.370 qpair failed and we were unable to recover it. 
00:30:18.370 [2024-11-19 18:29:19.818302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.370 [2024-11-19 18:29:19.818351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.370 [2024-11-19 18:29:19.818364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.370 [2024-11-19 18:29:19.818371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.370 [2024-11-19 18:29:19.818378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.370 [2024-11-19 18:29:19.818393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.370 qpair failed and we were unable to recover it. 
00:30:18.370 [2024-11-19 18:29:19.828398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.370 [2024-11-19 18:29:19.828457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.370 [2024-11-19 18:29:19.828470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.370 [2024-11-19 18:29:19.828477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.370 [2024-11-19 18:29:19.828484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.370 [2024-11-19 18:29:19.828502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.370 qpair failed and we were unable to recover it. 
00:30:18.632 [2024-11-19 18:29:19.838459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.632 [2024-11-19 18:29:19.838515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.632 [2024-11-19 18:29:19.838528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.632 [2024-11-19 18:29:19.838536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.632 [2024-11-19 18:29:19.838542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.632 [2024-11-19 18:29:19.838558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.632 qpair failed and we were unable to recover it. 
00:30:18.632 [2024-11-19 18:29:19.848441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.632 [2024-11-19 18:29:19.848498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.632 [2024-11-19 18:29:19.848511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.632 [2024-11-19 18:29:19.848518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.632 [2024-11-19 18:29:19.848524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.632 [2024-11-19 18:29:19.848538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.632 qpair failed and we were unable to recover it. 
00:30:18.632 [2024-11-19 18:29:19.858399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.632 [2024-11-19 18:29:19.858456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.632 [2024-11-19 18:29:19.858469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.632 [2024-11-19 18:29:19.858477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.632 [2024-11-19 18:29:19.858484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.632 [2024-11-19 18:29:19.858498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.632 qpair failed and we were unable to recover it. 
00:30:18.632 [2024-11-19 18:29:19.868371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.632 [2024-11-19 18:29:19.868422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.632 [2024-11-19 18:29:19.868436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.632 [2024-11-19 18:29:19.868444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.632 [2024-11-19 18:29:19.868451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.632 [2024-11-19 18:29:19.868466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.632 qpair failed and we were unable to recover it.
00:30:18.632 [2024-11-19 18:29:19.878492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.632 [2024-11-19 18:29:19.878543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.632 [2024-11-19 18:29:19.878557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.632 [2024-11-19 18:29:19.878564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.632 [2024-11-19 18:29:19.878571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.632 [2024-11-19 18:29:19.878586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.632 qpair failed and we were unable to recover it.
00:30:18.632 [2024-11-19 18:29:19.888509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.632 [2024-11-19 18:29:19.888566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.632 [2024-11-19 18:29:19.888580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.632 [2024-11-19 18:29:19.888587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.632 [2024-11-19 18:29:19.888594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.632 [2024-11-19 18:29:19.888609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.632 qpair failed and we were unable to recover it.
00:30:18.632 [2024-11-19 18:29:19.898522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.632 [2024-11-19 18:29:19.898573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.632 [2024-11-19 18:29:19.898586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.632 [2024-11-19 18:29:19.898593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.632 [2024-11-19 18:29:19.898600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.632 [2024-11-19 18:29:19.898614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.632 qpair failed and we were unable to recover it.
00:30:18.632 [2024-11-19 18:29:19.908595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.632 [2024-11-19 18:29:19.908678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.632 [2024-11-19 18:29:19.908691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.632 [2024-11-19 18:29:19.908698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.632 [2024-11-19 18:29:19.908705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.632 [2024-11-19 18:29:19.908719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.632 qpair failed and we were unable to recover it.
00:30:18.632 [2024-11-19 18:29:19.918604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.632 [2024-11-19 18:29:19.918652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.632 [2024-11-19 18:29:19.918668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.632 [2024-11-19 18:29:19.918675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.632 [2024-11-19 18:29:19.918682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.632 [2024-11-19 18:29:19.918696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.632 qpair failed and we were unable to recover it.
00:30:18.632 [2024-11-19 18:29:19.928635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.632 [2024-11-19 18:29:19.928681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.633 [2024-11-19 18:29:19.928694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.633 [2024-11-19 18:29:19.928701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.633 [2024-11-19 18:29:19.928708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.633 [2024-11-19 18:29:19.928722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.633 qpair failed and we were unable to recover it.
00:30:18.633 [2024-11-19 18:29:19.938591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.633 [2024-11-19 18:29:19.938637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.633 [2024-11-19 18:29:19.938650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.633 [2024-11-19 18:29:19.938658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.633 [2024-11-19 18:29:19.938664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.633 [2024-11-19 18:29:19.938678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.633 qpair failed and we were unable to recover it.
00:30:18.633 [2024-11-19 18:29:19.948658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.633 [2024-11-19 18:29:19.948753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.633 [2024-11-19 18:29:19.948766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.633 [2024-11-19 18:29:19.948773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.633 [2024-11-19 18:29:19.948781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.633 [2024-11-19 18:29:19.948796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.633 qpair failed and we were unable to recover it.
00:30:18.633 [2024-11-19 18:29:19.958682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.633 [2024-11-19 18:29:19.958730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.633 [2024-11-19 18:29:19.958743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.633 [2024-11-19 18:29:19.958750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.633 [2024-11-19 18:29:19.958760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.633 [2024-11-19 18:29:19.958775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.633 qpair failed and we were unable to recover it.
00:30:18.633 [2024-11-19 18:29:19.968742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.633 [2024-11-19 18:29:19.968794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.633 [2024-11-19 18:29:19.968807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.633 [2024-11-19 18:29:19.968814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.633 [2024-11-19 18:29:19.968821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.633 [2024-11-19 18:29:19.968835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.633 qpair failed and we were unable to recover it.
00:30:18.633 [2024-11-19 18:29:19.978744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.633 [2024-11-19 18:29:19.978791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.633 [2024-11-19 18:29:19.978805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.633 [2024-11-19 18:29:19.978812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.633 [2024-11-19 18:29:19.978818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.633 [2024-11-19 18:29:19.978833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.633 qpair failed and we were unable to recover it.
00:30:18.633 [2024-11-19 18:29:19.988781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.633 [2024-11-19 18:29:19.988845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.633 [2024-11-19 18:29:19.988858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.633 [2024-11-19 18:29:19.988866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.633 [2024-11-19 18:29:19.988872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.633 [2024-11-19 18:29:19.988887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.633 qpair failed and we were unable to recover it.
00:30:18.633 [2024-11-19 18:29:19.998814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.633 [2024-11-19 18:29:19.998863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.633 [2024-11-19 18:29:19.998875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.633 [2024-11-19 18:29:19.998883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.633 [2024-11-19 18:29:19.998889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.633 [2024-11-19 18:29:19.998903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.633 qpair failed and we were unable to recover it.
00:30:18.633 [2024-11-19 18:29:20.008862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.633 [2024-11-19 18:29:20.008948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.633 [2024-11-19 18:29:20.008973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.633 [2024-11-19 18:29:20.008983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.633 [2024-11-19 18:29:20.008991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.633 [2024-11-19 18:29:20.009011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.633 qpair failed and we were unable to recover it.
00:30:18.633 [2024-11-19 18:29:20.018845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.633 [2024-11-19 18:29:20.018899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.633 [2024-11-19 18:29:20.018924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.633 [2024-11-19 18:29:20.018933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.633 [2024-11-19 18:29:20.018940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.633 [2024-11-19 18:29:20.018960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.633 qpair failed and we were unable to recover it.
00:30:18.633 [2024-11-19 18:29:20.028804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.633 [2024-11-19 18:29:20.028859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.633 [2024-11-19 18:29:20.028879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.633 [2024-11-19 18:29:20.028887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.633 [2024-11-19 18:29:20.028894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.633 [2024-11-19 18:29:20.028911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.633 qpair failed and we were unable to recover it.
00:30:18.633 [2024-11-19 18:29:20.038946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.633 [2024-11-19 18:29:20.038993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.633 [2024-11-19 18:29:20.039007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.633 [2024-11-19 18:29:20.039015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.633 [2024-11-19 18:29:20.039022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.633 [2024-11-19 18:29:20.039037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.633 qpair failed and we were unable to recover it.
00:30:18.633 [2024-11-19 18:29:20.048969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.633 [2024-11-19 18:29:20.049021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.633 [2024-11-19 18:29:20.049039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.633 [2024-11-19 18:29:20.049047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.633 [2024-11-19 18:29:20.049054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.633 [2024-11-19 18:29:20.049070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.633 qpair failed and we were unable to recover it.
00:30:18.633 [2024-11-19 18:29:20.058953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.634 [2024-11-19 18:29:20.059006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.634 [2024-11-19 18:29:20.059019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.634 [2024-11-19 18:29:20.059027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.634 [2024-11-19 18:29:20.059033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.634 [2024-11-19 18:29:20.059048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.634 qpair failed and we were unable to recover it.
00:30:18.634 [2024-11-19 18:29:20.069019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.634 [2024-11-19 18:29:20.069078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.634 [2024-11-19 18:29:20.069092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.634 [2024-11-19 18:29:20.069099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.634 [2024-11-19 18:29:20.069105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.634 [2024-11-19 18:29:20.069121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.634 qpair failed and we were unable to recover it.
00:30:18.634 [2024-11-19 18:29:20.078990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.634 [2024-11-19 18:29:20.079042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.634 [2024-11-19 18:29:20.079056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.634 [2024-11-19 18:29:20.079063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.634 [2024-11-19 18:29:20.079070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.634 [2024-11-19 18:29:20.079085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.634 qpair failed and we were unable to recover it.
00:30:18.634 [2024-11-19 18:29:20.089069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.634 [2024-11-19 18:29:20.089116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.634 [2024-11-19 18:29:20.089130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.634 [2024-11-19 18:29:20.089141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.634 [2024-11-19 18:29:20.089148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.634 [2024-11-19 18:29:20.089167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.634 qpair failed and we were unable to recover it.
00:30:18.896 [2024-11-19 18:29:20.099040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.896 [2024-11-19 18:29:20.099090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.896 [2024-11-19 18:29:20.099104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.896 [2024-11-19 18:29:20.099111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.896 [2024-11-19 18:29:20.099118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.896 [2024-11-19 18:29:20.099133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.896 qpair failed and we were unable to recover it.
00:30:18.896 [2024-11-19 18:29:20.109129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.897 [2024-11-19 18:29:20.109190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.897 [2024-11-19 18:29:20.109204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.897 [2024-11-19 18:29:20.109211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.897 [2024-11-19 18:29:20.109218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.897 [2024-11-19 18:29:20.109233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.897 qpair failed and we were unable to recover it.
00:30:18.897 [2024-11-19 18:29:20.119136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.897 [2024-11-19 18:29:20.119192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.897 [2024-11-19 18:29:20.119205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.897 [2024-11-19 18:29:20.119212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.897 [2024-11-19 18:29:20.119219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.897 [2024-11-19 18:29:20.119233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.897 qpair failed and we were unable to recover it.
00:30:18.897 [2024-11-19 18:29:20.129173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.897 [2024-11-19 18:29:20.129223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.897 [2024-11-19 18:29:20.129236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.897 [2024-11-19 18:29:20.129244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.897 [2024-11-19 18:29:20.129251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.897 [2024-11-19 18:29:20.129266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.897 qpair failed and we were unable to recover it.
00:30:18.897 [2024-11-19 18:29:20.139154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.897 [2024-11-19 18:29:20.139210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.897 [2024-11-19 18:29:20.139223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.897 [2024-11-19 18:29:20.139231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.897 [2024-11-19 18:29:20.139237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.897 [2024-11-19 18:29:20.139253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.897 qpair failed and we were unable to recover it.
00:30:18.897 [2024-11-19 18:29:20.149234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.897 [2024-11-19 18:29:20.149285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.897 [2024-11-19 18:29:20.149298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.897 [2024-11-19 18:29:20.149306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.897 [2024-11-19 18:29:20.149312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.897 [2024-11-19 18:29:20.149327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.897 qpair failed and we were unable to recover it.
00:30:18.897 [2024-11-19 18:29:20.159204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.897 [2024-11-19 18:29:20.159270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.897 [2024-11-19 18:29:20.159283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.897 [2024-11-19 18:29:20.159291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.897 [2024-11-19 18:29:20.159297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.897 [2024-11-19 18:29:20.159312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.897 qpair failed and we were unable to recover it.
00:30:18.897 [2024-11-19 18:29:20.169254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.897 [2024-11-19 18:29:20.169328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.897 [2024-11-19 18:29:20.169342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.897 [2024-11-19 18:29:20.169350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.897 [2024-11-19 18:29:20.169356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.897 [2024-11-19 18:29:20.169371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.897 qpair failed and we were unable to recover it.
00:30:18.897 [2024-11-19 18:29:20.179258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.897 [2024-11-19 18:29:20.179308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.897 [2024-11-19 18:29:20.179322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.897 [2024-11-19 18:29:20.179329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.897 [2024-11-19 18:29:20.179336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.897 [2024-11-19 18:29:20.179350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.897 qpair failed and we were unable to recover it.
00:30:18.897 [2024-11-19 18:29:20.189320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.897 [2024-11-19 18:29:20.189380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.897 [2024-11-19 18:29:20.189393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.897 [2024-11-19 18:29:20.189401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.897 [2024-11-19 18:29:20.189407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.897 [2024-11-19 18:29:20.189422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.897 qpair failed and we were unable to recover it.
00:30:18.897 [2024-11-19 18:29:20.199349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.897 [2024-11-19 18:29:20.199396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.897 [2024-11-19 18:29:20.199409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.897 [2024-11-19 18:29:20.199416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.897 [2024-11-19 18:29:20.199423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.897 [2024-11-19 18:29:20.199437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.897 qpair failed and we were unable to recover it.
00:30:18.897 [2024-11-19 18:29:20.209258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:18.897 [2024-11-19 18:29:20.209315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:18.897 [2024-11-19 18:29:20.209328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:18.897 [2024-11-19 18:29:20.209336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:18.897 [2024-11-19 18:29:20.209342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:18.897 [2024-11-19 18:29:20.209357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:18.897 qpair failed and we were unable to recover it.
00:30:18.897 [2024-11-19 18:29:20.219280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.897 [2024-11-19 18:29:20.219325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.897 [2024-11-19 18:29:20.219338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.897 [2024-11-19 18:29:20.219353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.897 [2024-11-19 18:29:20.219359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.897 [2024-11-19 18:29:20.219374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.897 qpair failed and we were unable to recover it. 
00:30:18.897 [2024-11-19 18:29:20.229450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.898 [2024-11-19 18:29:20.229505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.898 [2024-11-19 18:29:20.229519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.898 [2024-11-19 18:29:20.229526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.898 [2024-11-19 18:29:20.229533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.898 [2024-11-19 18:29:20.229547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.898 qpair failed and we were unable to recover it. 
00:30:18.898 [2024-11-19 18:29:20.239464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.898 [2024-11-19 18:29:20.239533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.898 [2024-11-19 18:29:20.239545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.898 [2024-11-19 18:29:20.239553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.898 [2024-11-19 18:29:20.239559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.898 [2024-11-19 18:29:20.239574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.898 qpair failed and we were unable to recover it. 
00:30:18.898 [2024-11-19 18:29:20.249499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.898 [2024-11-19 18:29:20.249554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.898 [2024-11-19 18:29:20.249567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.898 [2024-11-19 18:29:20.249575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.898 [2024-11-19 18:29:20.249581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.898 [2024-11-19 18:29:20.249596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.898 qpair failed and we were unable to recover it. 
00:30:18.898 [2024-11-19 18:29:20.259379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.898 [2024-11-19 18:29:20.259430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.898 [2024-11-19 18:29:20.259443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.898 [2024-11-19 18:29:20.259450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.898 [2024-11-19 18:29:20.259457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.898 [2024-11-19 18:29:20.259475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.898 qpair failed and we were unable to recover it. 
00:30:18.898 [2024-11-19 18:29:20.269521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.898 [2024-11-19 18:29:20.269574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.898 [2024-11-19 18:29:20.269588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.898 [2024-11-19 18:29:20.269595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.898 [2024-11-19 18:29:20.269602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.898 [2024-11-19 18:29:20.269616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.898 qpair failed and we were unable to recover it. 
00:30:18.898 [2024-11-19 18:29:20.279537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.898 [2024-11-19 18:29:20.279588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.898 [2024-11-19 18:29:20.279601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.898 [2024-11-19 18:29:20.279608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.898 [2024-11-19 18:29:20.279615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.898 [2024-11-19 18:29:20.279629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.898 qpair failed and we were unable to recover it. 
00:30:18.898 [2024-11-19 18:29:20.289486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.898 [2024-11-19 18:29:20.289551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.898 [2024-11-19 18:29:20.289565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.898 [2024-11-19 18:29:20.289573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.898 [2024-11-19 18:29:20.289580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.898 [2024-11-19 18:29:20.289594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.898 qpair failed and we were unable to recover it. 
00:30:18.898 [2024-11-19 18:29:20.299595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.898 [2024-11-19 18:29:20.299648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.898 [2024-11-19 18:29:20.299662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.898 [2024-11-19 18:29:20.299669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.898 [2024-11-19 18:29:20.299675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.898 [2024-11-19 18:29:20.299695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.898 qpair failed and we were unable to recover it. 
00:30:18.898 [2024-11-19 18:29:20.309696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.898 [2024-11-19 18:29:20.309800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.898 [2024-11-19 18:29:20.309814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.898 [2024-11-19 18:29:20.309822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.898 [2024-11-19 18:29:20.309828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.898 [2024-11-19 18:29:20.309843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.898 qpair failed and we were unable to recover it. 
00:30:18.898 [2024-11-19 18:29:20.319670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.898 [2024-11-19 18:29:20.319716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.898 [2024-11-19 18:29:20.319730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.898 [2024-11-19 18:29:20.319737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.898 [2024-11-19 18:29:20.319744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.898 [2024-11-19 18:29:20.319758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.898 qpair failed and we were unable to recover it. 
00:30:18.898 [2024-11-19 18:29:20.329686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.898 [2024-11-19 18:29:20.329735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.898 [2024-11-19 18:29:20.329748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.898 [2024-11-19 18:29:20.329756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.898 [2024-11-19 18:29:20.329762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.898 [2024-11-19 18:29:20.329777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.898 qpair failed and we were unable to recover it. 
00:30:18.898 [2024-11-19 18:29:20.339704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.898 [2024-11-19 18:29:20.339791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.898 [2024-11-19 18:29:20.339804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.898 [2024-11-19 18:29:20.339811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.898 [2024-11-19 18:29:20.339817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.898 [2024-11-19 18:29:20.339832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.898 qpair failed and we were unable to recover it. 
00:30:18.898 [2024-11-19 18:29:20.349788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.898 [2024-11-19 18:29:20.349843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.898 [2024-11-19 18:29:20.349860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.898 [2024-11-19 18:29:20.349867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.898 [2024-11-19 18:29:20.349874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.898 [2024-11-19 18:29:20.349889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.898 qpair failed and we were unable to recover it. 
00:30:18.898 [2024-11-19 18:29:20.359778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.899 [2024-11-19 18:29:20.359831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.899 [2024-11-19 18:29:20.359856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.899 [2024-11-19 18:29:20.359865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.899 [2024-11-19 18:29:20.359872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:18.899 [2024-11-19 18:29:20.359892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:18.899 qpair failed and we were unable to recover it. 
00:30:19.161 [2024-11-19 18:29:20.369828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.161 [2024-11-19 18:29:20.369888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.161 [2024-11-19 18:29:20.369913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.161 [2024-11-19 18:29:20.369922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.161 [2024-11-19 18:29:20.369929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.161 [2024-11-19 18:29:20.369949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.162 qpair failed and we were unable to recover it. 
00:30:19.162 [2024-11-19 18:29:20.379693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.162 [2024-11-19 18:29:20.379746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.162 [2024-11-19 18:29:20.379761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.162 [2024-11-19 18:29:20.379769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.162 [2024-11-19 18:29:20.379776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.162 [2024-11-19 18:29:20.379791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.162 qpair failed and we were unable to recover it. 
00:30:19.162 [2024-11-19 18:29:20.389865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.162 [2024-11-19 18:29:20.389932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.162 [2024-11-19 18:29:20.389945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.162 [2024-11-19 18:29:20.389953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.162 [2024-11-19 18:29:20.389964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.162 [2024-11-19 18:29:20.389980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.162 qpair failed and we were unable to recover it. 
00:30:19.162 [2024-11-19 18:29:20.399775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.162 [2024-11-19 18:29:20.399824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.162 [2024-11-19 18:29:20.399838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.162 [2024-11-19 18:29:20.399845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.162 [2024-11-19 18:29:20.399852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.162 [2024-11-19 18:29:20.399867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.162 qpair failed and we were unable to recover it. 
00:30:19.162 [2024-11-19 18:29:20.409950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.162 [2024-11-19 18:29:20.410001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.162 [2024-11-19 18:29:20.410014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.162 [2024-11-19 18:29:20.410021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.162 [2024-11-19 18:29:20.410028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.162 [2024-11-19 18:29:20.410043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.162 qpair failed and we were unable to recover it. 
00:30:19.162 [2024-11-19 18:29:20.419810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.162 [2024-11-19 18:29:20.419863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.162 [2024-11-19 18:29:20.419888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.162 [2024-11-19 18:29:20.419897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.162 [2024-11-19 18:29:20.419903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.162 [2024-11-19 18:29:20.419924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.162 qpair failed and we were unable to recover it. 
00:30:19.162 [2024-11-19 18:29:20.429958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.162 [2024-11-19 18:29:20.430015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.162 [2024-11-19 18:29:20.430040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.162 [2024-11-19 18:29:20.430049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.162 [2024-11-19 18:29:20.430057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.162 [2024-11-19 18:29:20.430077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.162 qpair failed and we were unable to recover it. 
00:30:19.162 [2024-11-19 18:29:20.440016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.162 [2024-11-19 18:29:20.440073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.162 [2024-11-19 18:29:20.440089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.162 [2024-11-19 18:29:20.440097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.162 [2024-11-19 18:29:20.440107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.162 [2024-11-19 18:29:20.440124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.162 qpair failed and we were unable to recover it. 
00:30:19.162 [2024-11-19 18:29:20.450036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.162 [2024-11-19 18:29:20.450090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.162 [2024-11-19 18:29:20.450104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.162 [2024-11-19 18:29:20.450112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.162 [2024-11-19 18:29:20.450118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.162 [2024-11-19 18:29:20.450134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.162 qpair failed and we were unable to recover it. 
00:30:19.162 [2024-11-19 18:29:20.460026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.162 [2024-11-19 18:29:20.460077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.162 [2024-11-19 18:29:20.460090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.162 [2024-11-19 18:29:20.460097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.162 [2024-11-19 18:29:20.460104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.162 [2024-11-19 18:29:20.460119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.162 qpair failed and we were unable to recover it. 
00:30:19.162 [2024-11-19 18:29:20.470091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.162 [2024-11-19 18:29:20.470138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.162 [2024-11-19 18:29:20.470151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.162 [2024-11-19 18:29:20.470162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.162 [2024-11-19 18:29:20.470169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.162 [2024-11-19 18:29:20.470184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.162 qpair failed and we were unable to recover it. 
00:30:19.162 [2024-11-19 18:29:20.480088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.162 [2024-11-19 18:29:20.480138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.162 [2024-11-19 18:29:20.480155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.162 [2024-11-19 18:29:20.480167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.162 [2024-11-19 18:29:20.480173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.162 [2024-11-19 18:29:20.480189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.162 qpair failed and we were unable to recover it. 
00:30:19.162 [2024-11-19 18:29:20.490146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.162 [2024-11-19 18:29:20.490199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.162 [2024-11-19 18:29:20.490212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.162 [2024-11-19 18:29:20.490219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.162 [2024-11-19 18:29:20.490225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.162 [2024-11-19 18:29:20.490240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.162 qpair failed and we were unable to recover it. 
00:30:19.162 [2024-11-19 18:29:20.500139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.162 [2024-11-19 18:29:20.500222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.162 [2024-11-19 18:29:20.500236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.162 [2024-11-19 18:29:20.500243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.163 [2024-11-19 18:29:20.500250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.163 [2024-11-19 18:29:20.500265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.163 qpair failed and we were unable to recover it. 
00:30:19.163 [2024-11-19 18:29:20.510202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.163 [2024-11-19 18:29:20.510261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.163 [2024-11-19 18:29:20.510274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.163 [2024-11-19 18:29:20.510281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.163 [2024-11-19 18:29:20.510287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.163 [2024-11-19 18:29:20.510302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.163 qpair failed and we were unable to recover it. 
00:30:19.163 [2024-11-19 18:29:20.520272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.163 [2024-11-19 18:29:20.520322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.163 [2024-11-19 18:29:20.520335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.163 [2024-11-19 18:29:20.520343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.163 [2024-11-19 18:29:20.520353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.163 [2024-11-19 18:29:20.520368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.163 qpair failed and we were unable to recover it. 
00:30:19.163 [2024-11-19 18:29:20.530303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.163 [2024-11-19 18:29:20.530354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.163 [2024-11-19 18:29:20.530367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.163 [2024-11-19 18:29:20.530375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.163 [2024-11-19 18:29:20.530381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.163 [2024-11-19 18:29:20.530396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.163 qpair failed and we were unable to recover it. 
00:30:19.163 [2024-11-19 18:29:20.540254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.163 [2024-11-19 18:29:20.540302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.163 [2024-11-19 18:29:20.540314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.163 [2024-11-19 18:29:20.540322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.163 [2024-11-19 18:29:20.540328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.163 [2024-11-19 18:29:20.540343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.163 qpair failed and we were unable to recover it. 
00:30:19.163 [2024-11-19 18:29:20.550318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.163 [2024-11-19 18:29:20.550374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.163 [2024-11-19 18:29:20.550387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.163 [2024-11-19 18:29:20.550395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.163 [2024-11-19 18:29:20.550402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.163 [2024-11-19 18:29:20.550417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.163 qpair failed and we were unable to recover it. 
00:30:19.163 [2024-11-19 18:29:20.560221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.163 [2024-11-19 18:29:20.560276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.163 [2024-11-19 18:29:20.560291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.163 [2024-11-19 18:29:20.560298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.163 [2024-11-19 18:29:20.560305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.163 [2024-11-19 18:29:20.560321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.163 qpair failed and we were unable to recover it. 
00:30:19.163 [2024-11-19 18:29:20.570373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.163 [2024-11-19 18:29:20.570421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.163 [2024-11-19 18:29:20.570434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.163 [2024-11-19 18:29:20.570442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.163 [2024-11-19 18:29:20.570449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.163 [2024-11-19 18:29:20.570464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.163 qpair failed and we were unable to recover it. 
00:30:19.163 [2024-11-19 18:29:20.580362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.163 [2024-11-19 18:29:20.580408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.163 [2024-11-19 18:29:20.580421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.163 [2024-11-19 18:29:20.580429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.163 [2024-11-19 18:29:20.580436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.163 [2024-11-19 18:29:20.580451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.163 qpair failed and we were unable to recover it. 
00:30:19.163 [2024-11-19 18:29:20.590405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.163 [2024-11-19 18:29:20.590453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.163 [2024-11-19 18:29:20.590467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.163 [2024-11-19 18:29:20.590474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.163 [2024-11-19 18:29:20.590481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.163 [2024-11-19 18:29:20.590495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.163 qpair failed and we were unable to recover it. 
00:30:19.163 [2024-11-19 18:29:20.600432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.163 [2024-11-19 18:29:20.600487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.163 [2024-11-19 18:29:20.600500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.163 [2024-11-19 18:29:20.600507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.163 [2024-11-19 18:29:20.600514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.163 [2024-11-19 18:29:20.600529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.163 qpair failed and we were unable to recover it. 
00:30:19.163 [2024-11-19 18:29:20.610448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.163 [2024-11-19 18:29:20.610504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.163 [2024-11-19 18:29:20.610520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.163 [2024-11-19 18:29:20.610527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.163 [2024-11-19 18:29:20.610534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.163 [2024-11-19 18:29:20.610549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.163 qpair failed and we were unable to recover it. 
00:30:19.163 [2024-11-19 18:29:20.620430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.163 [2024-11-19 18:29:20.620477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.163 [2024-11-19 18:29:20.620490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.163 [2024-11-19 18:29:20.620497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.163 [2024-11-19 18:29:20.620504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.163 [2024-11-19 18:29:20.620519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.163 qpair failed and we were unable to recover it. 
00:30:19.426 [2024-11-19 18:29:20.630667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.426 [2024-11-19 18:29:20.630729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.426 [2024-11-19 18:29:20.630743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.426 [2024-11-19 18:29:20.630750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.426 [2024-11-19 18:29:20.630756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.426 [2024-11-19 18:29:20.630771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.426 qpair failed and we were unable to recover it. 
00:30:19.426 [2024-11-19 18:29:20.640620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.426 [2024-11-19 18:29:20.640677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.426 [2024-11-19 18:29:20.640691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.426 [2024-11-19 18:29:20.640698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.426 [2024-11-19 18:29:20.640704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.426 [2024-11-19 18:29:20.640719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.426 qpair failed and we were unable to recover it. 
00:30:19.426 [2024-11-19 18:29:20.650625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.426 [2024-11-19 18:29:20.650700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.426 [2024-11-19 18:29:20.650713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.426 [2024-11-19 18:29:20.650724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.426 [2024-11-19 18:29:20.650731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.426 [2024-11-19 18:29:20.650746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.426 qpair failed and we were unable to recover it. 
00:30:19.426 [2024-11-19 18:29:20.660507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.426 [2024-11-19 18:29:20.660557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.426 [2024-11-19 18:29:20.660570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.426 [2024-11-19 18:29:20.660577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.426 [2024-11-19 18:29:20.660583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.426 [2024-11-19 18:29:20.660599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.426 qpair failed and we were unable to recover it. 
00:30:19.426 [2024-11-19 18:29:20.670650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.426 [2024-11-19 18:29:20.670703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.426 [2024-11-19 18:29:20.670717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.426 [2024-11-19 18:29:20.670724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.426 [2024-11-19 18:29:20.670730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.426 [2024-11-19 18:29:20.670745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.426 qpair failed and we were unable to recover it. 
00:30:19.426 [2024-11-19 18:29:20.680645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.426 [2024-11-19 18:29:20.680743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.426 [2024-11-19 18:29:20.680756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.426 [2024-11-19 18:29:20.680764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.426 [2024-11-19 18:29:20.680770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.426 [2024-11-19 18:29:20.680784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.426 qpair failed and we were unable to recover it. 
00:30:19.426 [2024-11-19 18:29:20.690692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.426 [2024-11-19 18:29:20.690742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.426 [2024-11-19 18:29:20.690755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.426 [2024-11-19 18:29:20.690763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.426 [2024-11-19 18:29:20.690769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.426 [2024-11-19 18:29:20.690784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.426 qpair failed and we were unable to recover it. 
00:30:19.426 [2024-11-19 18:29:20.700659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.426 [2024-11-19 18:29:20.700711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.426 [2024-11-19 18:29:20.700724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.426 [2024-11-19 18:29:20.700731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.426 [2024-11-19 18:29:20.700738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.426 [2024-11-19 18:29:20.700753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.426 qpair failed and we were unable to recover it. 
00:30:19.426 [2024-11-19 18:29:20.710706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.426 [2024-11-19 18:29:20.710766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.426 [2024-11-19 18:29:20.710780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.426 [2024-11-19 18:29:20.710787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.426 [2024-11-19 18:29:20.710794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.426 [2024-11-19 18:29:20.710809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.426 qpair failed and we were unable to recover it. 
00:30:19.426 [2024-11-19 18:29:20.720653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.426 [2024-11-19 18:29:20.720702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.426 [2024-11-19 18:29:20.720715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.426 [2024-11-19 18:29:20.720722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.426 [2024-11-19 18:29:20.720729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.426 [2024-11-19 18:29:20.720744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.426 qpair failed and we were unable to recover it. 
00:30:19.426 [2024-11-19 18:29:20.730791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.426 [2024-11-19 18:29:20.730839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.426 [2024-11-19 18:29:20.730852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.426 [2024-11-19 18:29:20.730859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.426 [2024-11-19 18:29:20.730866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.426 [2024-11-19 18:29:20.730880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.426 qpair failed and we were unable to recover it. 
00:30:19.426 [2024-11-19 18:29:20.740772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.426 [2024-11-19 18:29:20.740837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.427 [2024-11-19 18:29:20.740861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.427 [2024-11-19 18:29:20.740870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.427 [2024-11-19 18:29:20.740879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.427 [2024-11-19 18:29:20.740899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.427 qpair failed and we were unable to recover it. 
00:30:19.427 [2024-11-19 18:29:20.750847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.427 [2024-11-19 18:29:20.750952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.427 [2024-11-19 18:29:20.750977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.427 [2024-11-19 18:29:20.750986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.427 [2024-11-19 18:29:20.750993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.427 [2024-11-19 18:29:20.751014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.427 qpair failed and we were unable to recover it. 
00:30:19.427 [2024-11-19 18:29:20.760897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.427 [2024-11-19 18:29:20.760985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.427 [2024-11-19 18:29:20.761010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.427 [2024-11-19 18:29:20.761019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.427 [2024-11-19 18:29:20.761026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.427 [2024-11-19 18:29:20.761047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.427 qpair failed and we were unable to recover it. 
00:30:19.427 [2024-11-19 18:29:20.770888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.427 [2024-11-19 18:29:20.770937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.427 [2024-11-19 18:29:20.770952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.427 [2024-11-19 18:29:20.770959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.427 [2024-11-19 18:29:20.770966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.427 [2024-11-19 18:29:20.770982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.427 qpair failed and we were unable to recover it. 
00:30:19.427 [2024-11-19 18:29:20.780847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.427 [2024-11-19 18:29:20.780891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.427 [2024-11-19 18:29:20.780904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.427 [2024-11-19 18:29:20.780917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.427 [2024-11-19 18:29:20.780923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.427 [2024-11-19 18:29:20.780939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.427 qpair failed and we were unable to recover it. 
00:30:19.427 [2024-11-19 18:29:20.790961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.427 [2024-11-19 18:29:20.791014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.427 [2024-11-19 18:29:20.791027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.427 [2024-11-19 18:29:20.791035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.427 [2024-11-19 18:29:20.791041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.427 [2024-11-19 18:29:20.791056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.427 qpair failed and we were unable to recover it. 
00:30:19.427 [2024-11-19 18:29:20.800960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.427 [2024-11-19 18:29:20.801009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.427 [2024-11-19 18:29:20.801022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.427 [2024-11-19 18:29:20.801030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.427 [2024-11-19 18:29:20.801036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.427 [2024-11-19 18:29:20.801051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.427 qpair failed and we were unable to recover it. 
00:30:19.427 [2024-11-19 18:29:20.810879] ... [identical CONNECT failure sequence repeated 34 more times at ~10 ms intervals, 18:29:20.810879 through 18:29:21.141945: ctrlr.c:762 "Unknown controller ID 0x1", nvme_fabric.c:599/610 "Connect command failed, rc -5" / "sct 1, sc 130", nvme_tcp.c:2348/2125 failed CONNECT poll on tqpair=0x7f5758000b90, nvme_qpair.c:812 "CQ transport error -6 (No such device or address) on qpair id 4"; each attempt ends with "qpair failed and we were unable to recover it."]
00:30:19.692 [2024-11-19 18:29:21.151890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.692 [2024-11-19 18:29:21.151947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.693 [2024-11-19 18:29:21.151960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.693 [2024-11-19 18:29:21.151967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.693 [2024-11-19 18:29:21.151974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.693 [2024-11-19 18:29:21.151989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.693 qpair failed and we were unable to recover it. 
00:30:19.955 [2024-11-19 18:29:21.161937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.956 [2024-11-19 18:29:21.162038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.956 [2024-11-19 18:29:21.162052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.956 [2024-11-19 18:29:21.162061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.956 [2024-11-19 18:29:21.162067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.956 [2024-11-19 18:29:21.162082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.956 qpair failed and we were unable to recover it. 
00:30:19.956 [2024-11-19 18:29:21.171926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.956 [2024-11-19 18:29:21.171972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.956 [2024-11-19 18:29:21.171990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.956 [2024-11-19 18:29:21.171998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.956 [2024-11-19 18:29:21.172004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.956 [2024-11-19 18:29:21.172019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.956 qpair failed and we were unable to recover it. 
00:30:19.956 [2024-11-19 18:29:21.181943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.956 [2024-11-19 18:29:21.181993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.956 [2024-11-19 18:29:21.182006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.956 [2024-11-19 18:29:21.182014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.956 [2024-11-19 18:29:21.182021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.956 [2024-11-19 18:29:21.182035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.956 qpair failed and we were unable to recover it. 
00:30:19.956 [2024-11-19 18:29:21.192033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.956 [2024-11-19 18:29:21.192083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.956 [2024-11-19 18:29:21.192096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.956 [2024-11-19 18:29:21.192104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.956 [2024-11-19 18:29:21.192110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.956 [2024-11-19 18:29:21.192125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.956 qpair failed and we were unable to recover it. 
00:30:19.956 [2024-11-19 18:29:21.202074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.956 [2024-11-19 18:29:21.202128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.956 [2024-11-19 18:29:21.202141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.956 [2024-11-19 18:29:21.202148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.956 [2024-11-19 18:29:21.202155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.956 [2024-11-19 18:29:21.202174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.956 qpair failed and we were unable to recover it. 
00:30:19.956 [2024-11-19 18:29:21.212046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.956 [2024-11-19 18:29:21.212089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.956 [2024-11-19 18:29:21.212102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.956 [2024-11-19 18:29:21.212113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.956 [2024-11-19 18:29:21.212120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.956 [2024-11-19 18:29:21.212135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.956 qpair failed and we were unable to recover it. 
00:30:19.956 [2024-11-19 18:29:21.222076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.956 [2024-11-19 18:29:21.222123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.956 [2024-11-19 18:29:21.222136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.956 [2024-11-19 18:29:21.222144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.956 [2024-11-19 18:29:21.222150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.956 [2024-11-19 18:29:21.222168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.956 qpair failed and we were unable to recover it. 
00:30:19.956 [2024-11-19 18:29:21.232146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.956 [2024-11-19 18:29:21.232225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.956 [2024-11-19 18:29:21.232238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.956 [2024-11-19 18:29:21.232246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.956 [2024-11-19 18:29:21.232252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.956 [2024-11-19 18:29:21.232268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.956 qpair failed and we were unable to recover it. 
00:30:19.956 [2024-11-19 18:29:21.242151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.956 [2024-11-19 18:29:21.242213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.956 [2024-11-19 18:29:21.242226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.956 [2024-11-19 18:29:21.242233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.956 [2024-11-19 18:29:21.242240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.956 [2024-11-19 18:29:21.242255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.956 qpair failed and we were unable to recover it. 
00:30:19.956 [2024-11-19 18:29:21.252135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.956 [2024-11-19 18:29:21.252185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.956 [2024-11-19 18:29:21.252199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.956 [2024-11-19 18:29:21.252206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.956 [2024-11-19 18:29:21.252213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.956 [2024-11-19 18:29:21.252228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.956 qpair failed and we were unable to recover it. 
00:30:19.956 [2024-11-19 18:29:21.262191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.956 [2024-11-19 18:29:21.262236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.956 [2024-11-19 18:29:21.262250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.956 [2024-11-19 18:29:21.262257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.956 [2024-11-19 18:29:21.262264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.956 [2024-11-19 18:29:21.262279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.956 qpair failed and we were unable to recover it. 
00:30:19.956 [2024-11-19 18:29:21.272256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.956 [2024-11-19 18:29:21.272314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.956 [2024-11-19 18:29:21.272327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.956 [2024-11-19 18:29:21.272334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.956 [2024-11-19 18:29:21.272341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.956 [2024-11-19 18:29:21.272356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.956 qpair failed and we were unable to recover it. 
00:30:19.956 [2024-11-19 18:29:21.282267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.956 [2024-11-19 18:29:21.282315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.956 [2024-11-19 18:29:21.282328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.956 [2024-11-19 18:29:21.282335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.956 [2024-11-19 18:29:21.282342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.956 [2024-11-19 18:29:21.282357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.956 qpair failed and we were unable to recover it. 
00:30:19.957 [2024-11-19 18:29:21.292244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.957 [2024-11-19 18:29:21.292291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.957 [2024-11-19 18:29:21.292304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.957 [2024-11-19 18:29:21.292311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.957 [2024-11-19 18:29:21.292318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.957 [2024-11-19 18:29:21.292332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.957 qpair failed and we were unable to recover it. 
00:30:19.957 [2024-11-19 18:29:21.302294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.957 [2024-11-19 18:29:21.302348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.957 [2024-11-19 18:29:21.302361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.957 [2024-11-19 18:29:21.302369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.957 [2024-11-19 18:29:21.302376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.957 [2024-11-19 18:29:21.302391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.957 qpair failed and we were unable to recover it. 
00:30:19.957 [2024-11-19 18:29:21.312350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.957 [2024-11-19 18:29:21.312399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.957 [2024-11-19 18:29:21.312413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.957 [2024-11-19 18:29:21.312420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.957 [2024-11-19 18:29:21.312426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.957 [2024-11-19 18:29:21.312441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.957 qpair failed and we were unable to recover it. 
00:30:19.957 [2024-11-19 18:29:21.322363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.957 [2024-11-19 18:29:21.322419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.957 [2024-11-19 18:29:21.322432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.957 [2024-11-19 18:29:21.322439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.957 [2024-11-19 18:29:21.322446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.957 [2024-11-19 18:29:21.322461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.957 qpair failed and we were unable to recover it. 
00:30:19.957 [2024-11-19 18:29:21.332371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.957 [2024-11-19 18:29:21.332464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.957 [2024-11-19 18:29:21.332477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.957 [2024-11-19 18:29:21.332485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.957 [2024-11-19 18:29:21.332491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.957 [2024-11-19 18:29:21.332506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.957 qpair failed and we were unable to recover it. 
00:30:19.957 [2024-11-19 18:29:21.342399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.957 [2024-11-19 18:29:21.342450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.957 [2024-11-19 18:29:21.342463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.957 [2024-11-19 18:29:21.342473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.957 [2024-11-19 18:29:21.342480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.957 [2024-11-19 18:29:21.342495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.957 qpair failed and we were unable to recover it. 
00:30:19.957 [2024-11-19 18:29:21.352485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.957 [2024-11-19 18:29:21.352537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.957 [2024-11-19 18:29:21.352550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.957 [2024-11-19 18:29:21.352558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.957 [2024-11-19 18:29:21.352565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.957 [2024-11-19 18:29:21.352579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.957 qpair failed and we were unable to recover it. 
00:30:19.957 [2024-11-19 18:29:21.362511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.957 [2024-11-19 18:29:21.362561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.957 [2024-11-19 18:29:21.362574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.957 [2024-11-19 18:29:21.362582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.957 [2024-11-19 18:29:21.362589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.957 [2024-11-19 18:29:21.362604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.957 qpair failed and we were unable to recover it. 
00:30:19.957 [2024-11-19 18:29:21.372462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.957 [2024-11-19 18:29:21.372509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.957 [2024-11-19 18:29:21.372522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.957 [2024-11-19 18:29:21.372530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.957 [2024-11-19 18:29:21.372536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.957 [2024-11-19 18:29:21.372551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.957 qpair failed and we were unable to recover it. 
00:30:19.957 [2024-11-19 18:29:21.382534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.957 [2024-11-19 18:29:21.382582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.957 [2024-11-19 18:29:21.382595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.957 [2024-11-19 18:29:21.382602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.957 [2024-11-19 18:29:21.382609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.957 [2024-11-19 18:29:21.382627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.957 qpair failed and we were unable to recover it. 
00:30:19.957 [2024-11-19 18:29:21.392589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.957 [2024-11-19 18:29:21.392640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.957 [2024-11-19 18:29:21.392653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.957 [2024-11-19 18:29:21.392660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.957 [2024-11-19 18:29:21.392667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.957 [2024-11-19 18:29:21.392681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.957 qpair failed and we were unable to recover it. 
00:30:19.957 [2024-11-19 18:29:21.402602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.957 [2024-11-19 18:29:21.402650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.957 [2024-11-19 18:29:21.402663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.957 [2024-11-19 18:29:21.402670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.957 [2024-11-19 18:29:21.402677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.957 [2024-11-19 18:29:21.402691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.957 qpair failed and we were unable to recover it. 
00:30:19.957 [2024-11-19 18:29:21.412592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.957 [2024-11-19 18:29:21.412657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.957 [2024-11-19 18:29:21.412671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.957 [2024-11-19 18:29:21.412679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.957 [2024-11-19 18:29:21.412686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:19.957 [2024-11-19 18:29:21.412701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.957 qpair failed and we were unable to recover it. 
00:30:20.221 [2024-11-19 18:29:21.422616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.221 [2024-11-19 18:29:21.422666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.221 [2024-11-19 18:29:21.422679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.221 [2024-11-19 18:29:21.422687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.221 [2024-11-19 18:29:21.422694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.221 [2024-11-19 18:29:21.422708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.221 qpair failed and we were unable to recover it. 
00:30:20.221 [2024-11-19 18:29:21.432669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.221 [2024-11-19 18:29:21.432719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.221 [2024-11-19 18:29:21.432732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.221 [2024-11-19 18:29:21.432739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.221 [2024-11-19 18:29:21.432746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.221 [2024-11-19 18:29:21.432761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.221 qpair failed and we were unable to recover it. 
00:30:20.221 [2024-11-19 18:29:21.442713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.221 [2024-11-19 18:29:21.442765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.221 [2024-11-19 18:29:21.442778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.221 [2024-11-19 18:29:21.442786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.221 [2024-11-19 18:29:21.442792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.221 [2024-11-19 18:29:21.442807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.221 qpair failed and we were unable to recover it.
00:30:20.221 [2024-11-19 18:29:21.452698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.221 [2024-11-19 18:29:21.452747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.221 [2024-11-19 18:29:21.452760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.221 [2024-11-19 18:29:21.452767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.221 [2024-11-19 18:29:21.452774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.221 [2024-11-19 18:29:21.452789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.221 qpair failed and we were unable to recover it.
00:30:20.221 [2024-11-19 18:29:21.462728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.221 [2024-11-19 18:29:21.462778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.221 [2024-11-19 18:29:21.462791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.221 [2024-11-19 18:29:21.462798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.221 [2024-11-19 18:29:21.462805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.221 [2024-11-19 18:29:21.462819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.221 qpair failed and we were unable to recover it.
00:30:20.221 [2024-11-19 18:29:21.472807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.221 [2024-11-19 18:29:21.472863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.221 [2024-11-19 18:29:21.472880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.221 [2024-11-19 18:29:21.472888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.221 [2024-11-19 18:29:21.472895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.221 [2024-11-19 18:29:21.472909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.221 qpair failed and we were unable to recover it.
00:30:20.221 [2024-11-19 18:29:21.482785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.221 [2024-11-19 18:29:21.482832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.221 [2024-11-19 18:29:21.482846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.221 [2024-11-19 18:29:21.482853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.221 [2024-11-19 18:29:21.482860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.221 [2024-11-19 18:29:21.482875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.221 qpair failed and we were unable to recover it.
00:30:20.221 [2024-11-19 18:29:21.492790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.221 [2024-11-19 18:29:21.492844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.221 [2024-11-19 18:29:21.492857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.221 [2024-11-19 18:29:21.492865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.221 [2024-11-19 18:29:21.492872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.221 [2024-11-19 18:29:21.492887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.221 qpair failed and we were unable to recover it.
00:30:20.221 [2024-11-19 18:29:21.502843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.221 [2024-11-19 18:29:21.502922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.221 [2024-11-19 18:29:21.502936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.221 [2024-11-19 18:29:21.502943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.221 [2024-11-19 18:29:21.502950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.221 [2024-11-19 18:29:21.502965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.221 qpair failed and we were unable to recover it.
00:30:20.221 [2024-11-19 18:29:21.512908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.221 [2024-11-19 18:29:21.512960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.221 [2024-11-19 18:29:21.512973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.221 [2024-11-19 18:29:21.512981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.221 [2024-11-19 18:29:21.512991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.221 [2024-11-19 18:29:21.513006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.221 qpair failed and we were unable to recover it.
00:30:20.221 [2024-11-19 18:29:21.522932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.221 [2024-11-19 18:29:21.522982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.221 [2024-11-19 18:29:21.522995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.221 [2024-11-19 18:29:21.523003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.221 [2024-11-19 18:29:21.523009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.221 [2024-11-19 18:29:21.523024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.221 qpair failed and we were unable to recover it.
00:30:20.221 [2024-11-19 18:29:21.532910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.221 [2024-11-19 18:29:21.532957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.221 [2024-11-19 18:29:21.532970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.221 [2024-11-19 18:29:21.532978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.221 [2024-11-19 18:29:21.532985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.221 [2024-11-19 18:29:21.532999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.221 qpair failed and we were unable to recover it.
00:30:20.222 [2024-11-19 18:29:21.542943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.222 [2024-11-19 18:29:21.542993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.222 [2024-11-19 18:29:21.543007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.222 [2024-11-19 18:29:21.543015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.222 [2024-11-19 18:29:21.543022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.222 [2024-11-19 18:29:21.543039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.222 qpair failed and we were unable to recover it.
00:30:20.222 [2024-11-19 18:29:21.553007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.222 [2024-11-19 18:29:21.553056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.222 [2024-11-19 18:29:21.553070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.222 [2024-11-19 18:29:21.553077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.222 [2024-11-19 18:29:21.553084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.222 [2024-11-19 18:29:21.553099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.222 qpair failed and we were unable to recover it.
00:30:20.222 [2024-11-19 18:29:21.563014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.222 [2024-11-19 18:29:21.563063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.222 [2024-11-19 18:29:21.563077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.222 [2024-11-19 18:29:21.563084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.222 [2024-11-19 18:29:21.563091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.222 [2024-11-19 18:29:21.563106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.222 qpair failed and we were unable to recover it.
00:30:20.222 [2024-11-19 18:29:21.573023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.222 [2024-11-19 18:29:21.573069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.222 [2024-11-19 18:29:21.573082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.222 [2024-11-19 18:29:21.573090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.222 [2024-11-19 18:29:21.573096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.222 [2024-11-19 18:29:21.573111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.222 qpair failed and we were unable to recover it.
00:30:20.222 [2024-11-19 18:29:21.583045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.222 [2024-11-19 18:29:21.583091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.222 [2024-11-19 18:29:21.583105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.222 [2024-11-19 18:29:21.583112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.222 [2024-11-19 18:29:21.583119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.222 [2024-11-19 18:29:21.583134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.222 qpair failed and we were unable to recover it.
00:30:20.222 [2024-11-19 18:29:21.593110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.222 [2024-11-19 18:29:21.593161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.222 [2024-11-19 18:29:21.593175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.222 [2024-11-19 18:29:21.593183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.222 [2024-11-19 18:29:21.593189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.222 [2024-11-19 18:29:21.593204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.222 qpair failed and we were unable to recover it.
00:30:20.222 [2024-11-19 18:29:21.603134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.222 [2024-11-19 18:29:21.603189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.222 [2024-11-19 18:29:21.603209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.222 [2024-11-19 18:29:21.603216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.222 [2024-11-19 18:29:21.603223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.222 [2024-11-19 18:29:21.603238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.222 qpair failed and we were unable to recover it.
00:30:20.222 [2024-11-19 18:29:21.613110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.222 [2024-11-19 18:29:21.613191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.222 [2024-11-19 18:29:21.613204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.222 [2024-11-19 18:29:21.613212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.222 [2024-11-19 18:29:21.613219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.222 [2024-11-19 18:29:21.613233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.222 qpair failed and we were unable to recover it.
00:30:20.222 [2024-11-19 18:29:21.623124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.222 [2024-11-19 18:29:21.623187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.222 [2024-11-19 18:29:21.623201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.222 [2024-11-19 18:29:21.623208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.222 [2024-11-19 18:29:21.623214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.222 [2024-11-19 18:29:21.623229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.222 qpair failed and we were unable to recover it.
00:30:20.222 [2024-11-19 18:29:21.633246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.222 [2024-11-19 18:29:21.633310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.222 [2024-11-19 18:29:21.633323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.222 [2024-11-19 18:29:21.633331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.222 [2024-11-19 18:29:21.633338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.222 [2024-11-19 18:29:21.633352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.222 qpair failed and we were unable to recover it.
00:30:20.222 [2024-11-19 18:29:21.643248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.222 [2024-11-19 18:29:21.643321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.222 [2024-11-19 18:29:21.643334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.222 [2024-11-19 18:29:21.643342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.222 [2024-11-19 18:29:21.643353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.222 [2024-11-19 18:29:21.643369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.222 qpair failed and we were unable to recover it.
00:30:20.222 [2024-11-19 18:29:21.653225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.222 [2024-11-19 18:29:21.653269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.222 [2024-11-19 18:29:21.653282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.222 [2024-11-19 18:29:21.653290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.222 [2024-11-19 18:29:21.653296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.222 [2024-11-19 18:29:21.653311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.222 qpair failed and we were unable to recover it.
00:30:20.222 [2024-11-19 18:29:21.663270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.222 [2024-11-19 18:29:21.663317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.222 [2024-11-19 18:29:21.663330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.222 [2024-11-19 18:29:21.663338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.222 [2024-11-19 18:29:21.663345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.222 [2024-11-19 18:29:21.663359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.222 qpair failed and we were unable to recover it.
00:30:20.222 [2024-11-19 18:29:21.673289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.223 [2024-11-19 18:29:21.673339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.223 [2024-11-19 18:29:21.673352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.223 [2024-11-19 18:29:21.673359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.223 [2024-11-19 18:29:21.673366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.223 [2024-11-19 18:29:21.673381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.223 qpair failed and we were unable to recover it. 
00:30:20.223 [2024-11-19 18:29:21.683352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.223 [2024-11-19 18:29:21.683398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.223 [2024-11-19 18:29:21.683411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.223 [2024-11-19 18:29:21.683419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.223 [2024-11-19 18:29:21.683425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.223 [2024-11-19 18:29:21.683440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.223 qpair failed and we were unable to recover it.
00:30:20.486 [2024-11-19 18:29:21.693298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.486 [2024-11-19 18:29:21.693385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.486 [2024-11-19 18:29:21.693398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.486 [2024-11-19 18:29:21.693405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.486 [2024-11-19 18:29:21.693412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.486 [2024-11-19 18:29:21.693427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.486 qpair failed and we were unable to recover it.
00:30:20.486 [2024-11-19 18:29:21.703371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.486 [2024-11-19 18:29:21.703420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.486 [2024-11-19 18:29:21.703433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.486 [2024-11-19 18:29:21.703440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.486 [2024-11-19 18:29:21.703446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.486 [2024-11-19 18:29:21.703461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.486 qpair failed and we were unable to recover it.
00:30:20.486 [2024-11-19 18:29:21.713402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.486 [2024-11-19 18:29:21.713451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.486 [2024-11-19 18:29:21.713464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.486 [2024-11-19 18:29:21.713472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.486 [2024-11-19 18:29:21.713478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.486 [2024-11-19 18:29:21.713492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.486 qpair failed and we were unable to recover it.
00:30:20.486 [2024-11-19 18:29:21.723367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.486 [2024-11-19 18:29:21.723462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.486 [2024-11-19 18:29:21.723475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.486 [2024-11-19 18:29:21.723482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.486 [2024-11-19 18:29:21.723489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.486 [2024-11-19 18:29:21.723503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.486 qpair failed and we were unable to recover it.
00:30:20.486 [2024-11-19 18:29:21.733395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.486 [2024-11-19 18:29:21.733471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.486 [2024-11-19 18:29:21.733487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.486 [2024-11-19 18:29:21.733495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.486 [2024-11-19 18:29:21.733501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.486 [2024-11-19 18:29:21.733516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.486 qpair failed and we were unable to recover it.
00:30:20.486 [2024-11-19 18:29:21.743450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:20.486 [2024-11-19 18:29:21.743502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:20.486 [2024-11-19 18:29:21.743515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:20.486 [2024-11-19 18:29:21.743522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:20.486 [2024-11-19 18:29:21.743529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:20.486 [2024-11-19 18:29:21.743544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:20.486 qpair failed and we were unable to recover it.
00:30:20.486 [2024-11-19 18:29:21.753503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.486 [2024-11-19 18:29:21.753549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.486 [2024-11-19 18:29:21.753562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.486 [2024-11-19 18:29:21.753569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.486 [2024-11-19 18:29:21.753576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.486 [2024-11-19 18:29:21.753590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.486 qpair failed and we were unable to recover it. 
00:30:20.486 [2024-11-19 18:29:21.763555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.486 [2024-11-19 18:29:21.763631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.486 [2024-11-19 18:29:21.763644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.486 [2024-11-19 18:29:21.763651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.486 [2024-11-19 18:29:21.763658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.486 [2024-11-19 18:29:21.763673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.486 qpair failed and we were unable to recover it. 
00:30:20.486 [2024-11-19 18:29:21.773556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.486 [2024-11-19 18:29:21.773641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.486 [2024-11-19 18:29:21.773654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.486 [2024-11-19 18:29:21.773664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.486 [2024-11-19 18:29:21.773671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.486 [2024-11-19 18:29:21.773685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.486 qpair failed and we were unable to recover it. 
00:30:20.486 [2024-11-19 18:29:21.783574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.486 [2024-11-19 18:29:21.783624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.486 [2024-11-19 18:29:21.783637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.486 [2024-11-19 18:29:21.783645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.487 [2024-11-19 18:29:21.783651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.487 [2024-11-19 18:29:21.783666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.487 qpair failed and we were unable to recover it. 
00:30:20.487 [2024-11-19 18:29:21.793653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.487 [2024-11-19 18:29:21.793704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.487 [2024-11-19 18:29:21.793718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.487 [2024-11-19 18:29:21.793725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.487 [2024-11-19 18:29:21.793732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.487 [2024-11-19 18:29:21.793747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.487 qpair failed and we were unable to recover it. 
00:30:20.487 [2024-11-19 18:29:21.803541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.487 [2024-11-19 18:29:21.803587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.487 [2024-11-19 18:29:21.803601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.487 [2024-11-19 18:29:21.803609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.487 [2024-11-19 18:29:21.803616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.487 [2024-11-19 18:29:21.803631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.487 qpair failed and we were unable to recover it. 
00:30:20.487 [2024-11-19 18:29:21.813650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.487 [2024-11-19 18:29:21.813693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.487 [2024-11-19 18:29:21.813707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.487 [2024-11-19 18:29:21.813714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.487 [2024-11-19 18:29:21.813721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.487 [2024-11-19 18:29:21.813739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.487 qpair failed and we were unable to recover it. 
00:30:20.487 [2024-11-19 18:29:21.823680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.487 [2024-11-19 18:29:21.823776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.487 [2024-11-19 18:29:21.823790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.487 [2024-11-19 18:29:21.823797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.487 [2024-11-19 18:29:21.823804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.487 [2024-11-19 18:29:21.823819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.487 qpair failed and we were unable to recover it. 
00:30:20.487 [2024-11-19 18:29:21.833769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.487 [2024-11-19 18:29:21.833849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.487 [2024-11-19 18:29:21.833862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.487 [2024-11-19 18:29:21.833870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.487 [2024-11-19 18:29:21.833877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.487 [2024-11-19 18:29:21.833891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.487 qpair failed and we were unable to recover it. 
00:30:20.487 [2024-11-19 18:29:21.843780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.487 [2024-11-19 18:29:21.843839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.487 [2024-11-19 18:29:21.843852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.487 [2024-11-19 18:29:21.843859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.487 [2024-11-19 18:29:21.843866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.487 [2024-11-19 18:29:21.843881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.487 qpair failed and we were unable to recover it. 
00:30:20.487 [2024-11-19 18:29:21.853774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.487 [2024-11-19 18:29:21.853818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.487 [2024-11-19 18:29:21.853831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.487 [2024-11-19 18:29:21.853839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.487 [2024-11-19 18:29:21.853845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.487 [2024-11-19 18:29:21.853860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.487 qpair failed and we were unable to recover it. 
00:30:20.487 [2024-11-19 18:29:21.863770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.487 [2024-11-19 18:29:21.863821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.487 [2024-11-19 18:29:21.863835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.487 [2024-11-19 18:29:21.863842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.487 [2024-11-19 18:29:21.863849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.487 [2024-11-19 18:29:21.863863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.487 qpair failed and we were unable to recover it. 
00:30:20.487 [2024-11-19 18:29:21.873890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.487 [2024-11-19 18:29:21.873949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.487 [2024-11-19 18:29:21.873974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.487 [2024-11-19 18:29:21.873983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.487 [2024-11-19 18:29:21.873990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.487 [2024-11-19 18:29:21.874010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.487 qpair failed and we were unable to recover it. 
00:30:20.487 [2024-11-19 18:29:21.883923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.487 [2024-11-19 18:29:21.883974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.487 [2024-11-19 18:29:21.883998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.487 [2024-11-19 18:29:21.884007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.487 [2024-11-19 18:29:21.884014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.487 [2024-11-19 18:29:21.884034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.487 qpair failed and we were unable to recover it. 
00:30:20.487 [2024-11-19 18:29:21.893898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.487 [2024-11-19 18:29:21.893945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.487 [2024-11-19 18:29:21.893960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.488 [2024-11-19 18:29:21.893967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.488 [2024-11-19 18:29:21.893974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.488 [2024-11-19 18:29:21.893990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.488 qpair failed and we were unable to recover it. 
00:30:20.488 [2024-11-19 18:29:21.903977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.488 [2024-11-19 18:29:21.904056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.488 [2024-11-19 18:29:21.904069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.488 [2024-11-19 18:29:21.904081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.488 [2024-11-19 18:29:21.904088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.488 [2024-11-19 18:29:21.904103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.488 qpair failed and we were unable to recover it. 
00:30:20.488 [2024-11-19 18:29:21.914017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.488 [2024-11-19 18:29:21.914067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.488 [2024-11-19 18:29:21.914091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.488 [2024-11-19 18:29:21.914098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.488 [2024-11-19 18:29:21.914105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.488 [2024-11-19 18:29:21.914124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.488 qpair failed and we were unable to recover it. 
00:30:20.488 [2024-11-19 18:29:21.923955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.488 [2024-11-19 18:29:21.924050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.488 [2024-11-19 18:29:21.924064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.488 [2024-11-19 18:29:21.924072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.488 [2024-11-19 18:29:21.924078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.488 [2024-11-19 18:29:21.924093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.488 qpair failed and we were unable to recover it. 
00:30:20.488 [2024-11-19 18:29:21.934018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.488 [2024-11-19 18:29:21.934064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.488 [2024-11-19 18:29:21.934077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.488 [2024-11-19 18:29:21.934084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.488 [2024-11-19 18:29:21.934091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.488 [2024-11-19 18:29:21.934106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.488 qpair failed and we were unable to recover it. 
00:30:20.488 [2024-11-19 18:29:21.944048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.488 [2024-11-19 18:29:21.944101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.488 [2024-11-19 18:29:21.944115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.488 [2024-11-19 18:29:21.944122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.488 [2024-11-19 18:29:21.944128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.488 [2024-11-19 18:29:21.944147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.488 qpair failed and we were unable to recover it. 
00:30:20.750 [2024-11-19 18:29:21.954118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.750 [2024-11-19 18:29:21.954174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.750 [2024-11-19 18:29:21.954187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.750 [2024-11-19 18:29:21.954195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.751 [2024-11-19 18:29:21.954201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.751 [2024-11-19 18:29:21.954216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.751 qpair failed and we were unable to recover it. 
00:30:20.751 [2024-11-19 18:29:21.964138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.751 [2024-11-19 18:29:21.964190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.751 [2024-11-19 18:29:21.964204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.751 [2024-11-19 18:29:21.964211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.751 [2024-11-19 18:29:21.964218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.751 [2024-11-19 18:29:21.964232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.751 qpair failed and we were unable to recover it. 
00:30:20.751 [2024-11-19 18:29:21.974003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.751 [2024-11-19 18:29:21.974050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.751 [2024-11-19 18:29:21.974064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.751 [2024-11-19 18:29:21.974072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.751 [2024-11-19 18:29:21.974078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.751 [2024-11-19 18:29:21.974094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.751 qpair failed and we were unable to recover it. 
00:30:20.751 [2024-11-19 18:29:21.984120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.751 [2024-11-19 18:29:21.984171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.751 [2024-11-19 18:29:21.984185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.751 [2024-11-19 18:29:21.984193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.751 [2024-11-19 18:29:21.984200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.751 [2024-11-19 18:29:21.984215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.751 qpair failed and we were unable to recover it. 
00:30:20.751 [2024-11-19 18:29:21.994218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.751 [2024-11-19 18:29:21.994269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.751 [2024-11-19 18:29:21.994283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.751 [2024-11-19 18:29:21.994290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.751 [2024-11-19 18:29:21.994296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.751 [2024-11-19 18:29:21.994311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.751 qpair failed and we were unable to recover it. 
00:30:20.751 [2024-11-19 18:29:22.004239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.751 [2024-11-19 18:29:22.004291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.751 [2024-11-19 18:29:22.004304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.751 [2024-11-19 18:29:22.004312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.751 [2024-11-19 18:29:22.004318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.751 [2024-11-19 18:29:22.004333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.751 qpair failed and we were unable to recover it. 
00:30:20.751 [2024-11-19 18:29:22.014225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.751 [2024-11-19 18:29:22.014267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.751 [2024-11-19 18:29:22.014281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.751 [2024-11-19 18:29:22.014288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.751 [2024-11-19 18:29:22.014294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.751 [2024-11-19 18:29:22.014309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.751 qpair failed and we were unable to recover it. 
00:30:20.751 [2024-11-19 18:29:22.024239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.751 [2024-11-19 18:29:22.024288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.751 [2024-11-19 18:29:22.024301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.751 [2024-11-19 18:29:22.024309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.751 [2024-11-19 18:29:22.024315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.751 [2024-11-19 18:29:22.024330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.751 qpair failed and we were unable to recover it. 
00:30:20.751 [2024-11-19 18:29:22.034234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.751 [2024-11-19 18:29:22.034290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.751 [2024-11-19 18:29:22.034307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.751 [2024-11-19 18:29:22.034315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.751 [2024-11-19 18:29:22.034322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.751 [2024-11-19 18:29:22.034337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.751 qpair failed and we were unable to recover it. 
00:30:20.751 [2024-11-19 18:29:22.044311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.751 [2024-11-19 18:29:22.044361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.751 [2024-11-19 18:29:22.044374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.751 [2024-11-19 18:29:22.044382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.751 [2024-11-19 18:29:22.044388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:20.751 [2024-11-19 18:29:22.044403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:20.751 qpair failed and we were unable to recover it. 
[... the identical error cluster (ctrlr.c "Unknown controller ID 0x1" -> nvme_fabric.c CONNECT failed, rc -5, sct 1, sc 130 -> nvme_tcp.c failed to connect tqpair=0x7f5758000b90 -> nvme_qpair.c CQ transport error -6 on qpair id 4 -> "qpair failed and we were unable to recover it.") repeats 34 more times at ~10 ms intervals, target timestamps 18:29:22.054 through 18:29:22.385 ...]
00:30:21.016 [2024-11-19 18:29:22.395306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.016 [2024-11-19 18:29:22.395363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.016 [2024-11-19 18:29:22.395376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.016 [2024-11-19 18:29:22.395383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.016 [2024-11-19 18:29:22.395390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.016 [2024-11-19 18:29:22.395405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.016 qpair failed and we were unable to recover it. 
00:30:21.016 [2024-11-19 18:29:22.405222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.016 [2024-11-19 18:29:22.405280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.016 [2024-11-19 18:29:22.405294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.016 [2024-11-19 18:29:22.405301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.016 [2024-11-19 18:29:22.405308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.016 [2024-11-19 18:29:22.405323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.017 qpair failed and we were unable to recover it. 
00:30:21.017 [2024-11-19 18:29:22.415300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.017 [2024-11-19 18:29:22.415367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.017 [2024-11-19 18:29:22.415381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.017 [2024-11-19 18:29:22.415388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.017 [2024-11-19 18:29:22.415395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.017 [2024-11-19 18:29:22.415410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.017 qpair failed and we were unable to recover it. 
00:30:21.017 [2024-11-19 18:29:22.425264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.017 [2024-11-19 18:29:22.425317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.017 [2024-11-19 18:29:22.425330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.017 [2024-11-19 18:29:22.425338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.017 [2024-11-19 18:29:22.425344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.017 [2024-11-19 18:29:22.425359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.017 qpair failed and we were unable to recover it. 
00:30:21.017 [2024-11-19 18:29:22.435330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.017 [2024-11-19 18:29:22.435378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.017 [2024-11-19 18:29:22.435391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.017 [2024-11-19 18:29:22.435399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.017 [2024-11-19 18:29:22.435405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.017 [2024-11-19 18:29:22.435420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.017 qpair failed and we were unable to recover it. 
00:30:21.017 [2024-11-19 18:29:22.445372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.017 [2024-11-19 18:29:22.445429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.017 [2024-11-19 18:29:22.445442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.017 [2024-11-19 18:29:22.445449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.017 [2024-11-19 18:29:22.445456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.017 [2024-11-19 18:29:22.445470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.017 qpair failed and we were unable to recover it. 
00:30:21.017 [2024-11-19 18:29:22.455434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.017 [2024-11-19 18:29:22.455482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.017 [2024-11-19 18:29:22.455495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.017 [2024-11-19 18:29:22.455503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.017 [2024-11-19 18:29:22.455509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.017 [2024-11-19 18:29:22.455524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.017 qpair failed and we were unable to recover it. 
00:30:21.017 [2024-11-19 18:29:22.465476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.017 [2024-11-19 18:29:22.465522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.017 [2024-11-19 18:29:22.465535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.017 [2024-11-19 18:29:22.465546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.017 [2024-11-19 18:29:22.465552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.017 [2024-11-19 18:29:22.465567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.017 qpair failed and we were unable to recover it. 
00:30:21.017 [2024-11-19 18:29:22.475536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.017 [2024-11-19 18:29:22.475618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.017 [2024-11-19 18:29:22.475631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.017 [2024-11-19 18:29:22.475638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.017 [2024-11-19 18:29:22.475645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.017 [2024-11-19 18:29:22.475660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.017 qpair failed and we were unable to recover it. 
00:30:21.280 [2024-11-19 18:29:22.485432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.280 [2024-11-19 18:29:22.485503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.280 [2024-11-19 18:29:22.485516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.280 [2024-11-19 18:29:22.485523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.280 [2024-11-19 18:29:22.485529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.280 [2024-11-19 18:29:22.485545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.280 qpair failed and we were unable to recover it. 
00:30:21.280 [2024-11-19 18:29:22.495553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.280 [2024-11-19 18:29:22.495599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.280 [2024-11-19 18:29:22.495612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.280 [2024-11-19 18:29:22.495620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.280 [2024-11-19 18:29:22.495626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.280 [2024-11-19 18:29:22.495641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.280 qpair failed and we were unable to recover it. 
00:30:21.280 [2024-11-19 18:29:22.505574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.280 [2024-11-19 18:29:22.505620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.280 [2024-11-19 18:29:22.505633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.280 [2024-11-19 18:29:22.505641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.280 [2024-11-19 18:29:22.505647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.280 [2024-11-19 18:29:22.505665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.280 qpair failed and we were unable to recover it. 
00:30:21.280 [2024-11-19 18:29:22.515658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.280 [2024-11-19 18:29:22.515712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.280 [2024-11-19 18:29:22.515725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.280 [2024-11-19 18:29:22.515733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.281 [2024-11-19 18:29:22.515739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.281 [2024-11-19 18:29:22.515754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-19 18:29:22.525638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.281 [2024-11-19 18:29:22.525689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.281 [2024-11-19 18:29:22.525701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.281 [2024-11-19 18:29:22.525709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.281 [2024-11-19 18:29:22.525716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.281 [2024-11-19 18:29:22.525730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-19 18:29:22.535514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.281 [2024-11-19 18:29:22.535555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.281 [2024-11-19 18:29:22.535568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.281 [2024-11-19 18:29:22.535576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.281 [2024-11-19 18:29:22.535582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.281 [2024-11-19 18:29:22.535596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-19 18:29:22.545714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.281 [2024-11-19 18:29:22.545778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.281 [2024-11-19 18:29:22.545791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.281 [2024-11-19 18:29:22.545798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.281 [2024-11-19 18:29:22.545804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.281 [2024-11-19 18:29:22.545819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-19 18:29:22.555657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.281 [2024-11-19 18:29:22.555708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.281 [2024-11-19 18:29:22.555722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.281 [2024-11-19 18:29:22.555729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.281 [2024-11-19 18:29:22.555736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.281 [2024-11-19 18:29:22.555752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-19 18:29:22.565767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.281 [2024-11-19 18:29:22.565816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.281 [2024-11-19 18:29:22.565830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.281 [2024-11-19 18:29:22.565837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.281 [2024-11-19 18:29:22.565844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.281 [2024-11-19 18:29:22.565859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-19 18:29:22.575738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.281 [2024-11-19 18:29:22.575784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.281 [2024-11-19 18:29:22.575797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.281 [2024-11-19 18:29:22.575804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.281 [2024-11-19 18:29:22.575811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.281 [2024-11-19 18:29:22.575825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-19 18:29:22.585791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.281 [2024-11-19 18:29:22.585850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.281 [2024-11-19 18:29:22.585863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.281 [2024-11-19 18:29:22.585872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.281 [2024-11-19 18:29:22.585878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.281 [2024-11-19 18:29:22.585893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-19 18:29:22.595868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.281 [2024-11-19 18:29:22.595921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.281 [2024-11-19 18:29:22.595940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.281 [2024-11-19 18:29:22.595948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.281 [2024-11-19 18:29:22.595958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.281 [2024-11-19 18:29:22.595975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-19 18:29:22.605885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.281 [2024-11-19 18:29:22.605933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.281 [2024-11-19 18:29:22.605948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.281 [2024-11-19 18:29:22.605956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.281 [2024-11-19 18:29:22.605962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.281 [2024-11-19 18:29:22.605977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-19 18:29:22.615743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.281 [2024-11-19 18:29:22.615788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.281 [2024-11-19 18:29:22.615805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.281 [2024-11-19 18:29:22.615813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.281 [2024-11-19 18:29:22.615819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.281 [2024-11-19 18:29:22.615835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-19 18:29:22.625853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.281 [2024-11-19 18:29:22.625929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.281 [2024-11-19 18:29:22.625943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.281 [2024-11-19 18:29:22.625951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.281 [2024-11-19 18:29:22.625958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.281 [2024-11-19 18:29:22.625973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-19 18:29:22.635983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.281 [2024-11-19 18:29:22.636045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.281 [2024-11-19 18:29:22.636070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.281 [2024-11-19 18:29:22.636078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.281 [2024-11-19 18:29:22.636090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.281 [2024-11-19 18:29:22.636111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.281 qpair failed and we were unable to recover it. 
00:30:21.281 [2024-11-19 18:29:22.645969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.281 [2024-11-19 18:29:22.646020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.281 [2024-11-19 18:29:22.646036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.282 [2024-11-19 18:29:22.646043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.282 [2024-11-19 18:29:22.646050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.282 [2024-11-19 18:29:22.646066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.282 qpair failed and we were unable to recover it. 
00:30:21.282 [2024-11-19 18:29:22.655949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.282 [2024-11-19 18:29:22.655997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.282 [2024-11-19 18:29:22.656011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.282 [2024-11-19 18:29:22.656019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.282 [2024-11-19 18:29:22.656025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.282 [2024-11-19 18:29:22.656041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.282 qpair failed and we were unable to recover it. 
00:30:21.282 [2024-11-19 18:29:22.665987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.282 [2024-11-19 18:29:22.666035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.282 [2024-11-19 18:29:22.666048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.282 [2024-11-19 18:29:22.666056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.282 [2024-11-19 18:29:22.666062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.282 [2024-11-19 18:29:22.666077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.282 qpair failed and we were unable to recover it.
00:30:21.282 [2024-11-19 18:29:22.676058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.282 [2024-11-19 18:29:22.676104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.282 [2024-11-19 18:29:22.676118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.282 [2024-11-19 18:29:22.676125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.282 [2024-11-19 18:29:22.676132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.282 [2024-11-19 18:29:22.676147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.282 qpair failed and we were unable to recover it.
00:30:21.282 [2024-11-19 18:29:22.686217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.282 [2024-11-19 18:29:22.686284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.282 [2024-11-19 18:29:22.686297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.282 [2024-11-19 18:29:22.686304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.282 [2024-11-19 18:29:22.686311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.282 [2024-11-19 18:29:22.686326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.282 qpair failed and we were unable to recover it.
00:30:21.282 [2024-11-19 18:29:22.696108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.282 [2024-11-19 18:29:22.696210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.282 [2024-11-19 18:29:22.696224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.282 [2024-11-19 18:29:22.696231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.282 [2024-11-19 18:29:22.696237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.282 [2024-11-19 18:29:22.696253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.282 qpair failed and we were unable to recover it.
00:30:21.282 [2024-11-19 18:29:22.706109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.282 [2024-11-19 18:29:22.706156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.282 [2024-11-19 18:29:22.706174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.282 [2024-11-19 18:29:22.706181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.282 [2024-11-19 18:29:22.706188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.282 [2024-11-19 18:29:22.706203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.282 qpair failed and we were unable to recover it.
00:30:21.282 [2024-11-19 18:29:22.716179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.282 [2024-11-19 18:29:22.716233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.282 [2024-11-19 18:29:22.716246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.282 [2024-11-19 18:29:22.716253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.282 [2024-11-19 18:29:22.716260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.282 [2024-11-19 18:29:22.716275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.282 qpair failed and we were unable to recover it.
00:30:21.282 [2024-11-19 18:29:22.726197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.282 [2024-11-19 18:29:22.726244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.282 [2024-11-19 18:29:22.726260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.282 [2024-11-19 18:29:22.726268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.282 [2024-11-19 18:29:22.726274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.282 [2024-11-19 18:29:22.726289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.282 qpair failed and we were unable to recover it.
00:30:21.282 [2024-11-19 18:29:22.736182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.282 [2024-11-19 18:29:22.736226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.282 [2024-11-19 18:29:22.736238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.282 [2024-11-19 18:29:22.736246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.282 [2024-11-19 18:29:22.736254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.282 [2024-11-19 18:29:22.736269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.282 qpair failed and we were unable to recover it.
00:30:21.282 [2024-11-19 18:29:22.746203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.282 [2024-11-19 18:29:22.746250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.282 [2024-11-19 18:29:22.746263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.282 [2024-11-19 18:29:22.746271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.282 [2024-11-19 18:29:22.746278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.282 [2024-11-19 18:29:22.746293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.282 qpair failed and we were unable to recover it.
00:30:21.545 [2024-11-19 18:29:22.756239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.545 [2024-11-19 18:29:22.756293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.545 [2024-11-19 18:29:22.756307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.545 [2024-11-19 18:29:22.756314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.545 [2024-11-19 18:29:22.756321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.545 [2024-11-19 18:29:22.756336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.545 qpair failed and we were unable to recover it.
00:30:21.545 [2024-11-19 18:29:22.766287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.545 [2024-11-19 18:29:22.766337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.545 [2024-11-19 18:29:22.766350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.545 [2024-11-19 18:29:22.766358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.545 [2024-11-19 18:29:22.766368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.545 [2024-11-19 18:29:22.766384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.545 qpair failed and we were unable to recover it.
00:30:21.545 [2024-11-19 18:29:22.776260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.545 [2024-11-19 18:29:22.776306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.545 [2024-11-19 18:29:22.776320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.545 [2024-11-19 18:29:22.776327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.545 [2024-11-19 18:29:22.776334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.545 [2024-11-19 18:29:22.776349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.545 qpair failed and we were unable to recover it.
00:30:21.545 [2024-11-19 18:29:22.786316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.545 [2024-11-19 18:29:22.786362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.545 [2024-11-19 18:29:22.786375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.545 [2024-11-19 18:29:22.786383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.545 [2024-11-19 18:29:22.786389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.545 [2024-11-19 18:29:22.786405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.545 qpair failed and we were unable to recover it.
00:30:21.545 [2024-11-19 18:29:22.796363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.545 [2024-11-19 18:29:22.796414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.545 [2024-11-19 18:29:22.796427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.545 [2024-11-19 18:29:22.796434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.545 [2024-11-19 18:29:22.796441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.545 [2024-11-19 18:29:22.796456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.545 qpair failed and we were unable to recover it.
00:30:21.545 [2024-11-19 18:29:22.806357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.545 [2024-11-19 18:29:22.806453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.545 [2024-11-19 18:29:22.806466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.545 [2024-11-19 18:29:22.806474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.545 [2024-11-19 18:29:22.806480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.545 [2024-11-19 18:29:22.806495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.545 qpair failed and we were unable to recover it.
00:30:21.545 [2024-11-19 18:29:22.816411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.545 [2024-11-19 18:29:22.816456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.545 [2024-11-19 18:29:22.816469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.545 [2024-11-19 18:29:22.816476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.545 [2024-11-19 18:29:22.816483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.545 [2024-11-19 18:29:22.816498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.545 qpair failed and we were unable to recover it.
00:30:21.545 [2024-11-19 18:29:22.826409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.545 [2024-11-19 18:29:22.826457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.545 [2024-11-19 18:29:22.826470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.545 [2024-11-19 18:29:22.826478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.545 [2024-11-19 18:29:22.826485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.545 [2024-11-19 18:29:22.826499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.545 qpair failed and we were unable to recover it.
00:30:21.545 [2024-11-19 18:29:22.836462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.545 [2024-11-19 18:29:22.836512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.545 [2024-11-19 18:29:22.836525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.545 [2024-11-19 18:29:22.836532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.545 [2024-11-19 18:29:22.836539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.545 [2024-11-19 18:29:22.836553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.545 qpair failed and we were unable to recover it.
00:30:21.545 [2024-11-19 18:29:22.846503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.545 [2024-11-19 18:29:22.846556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.545 [2024-11-19 18:29:22.846569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.546 [2024-11-19 18:29:22.846576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.546 [2024-11-19 18:29:22.846583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.546 [2024-11-19 18:29:22.846598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.546 qpair failed and we were unable to recover it.
00:30:21.546 [2024-11-19 18:29:22.856490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.546 [2024-11-19 18:29:22.856534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.546 [2024-11-19 18:29:22.856551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.546 [2024-11-19 18:29:22.856558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.546 [2024-11-19 18:29:22.856564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.546 [2024-11-19 18:29:22.856579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.546 qpair failed and we were unable to recover it.
00:30:21.546 [2024-11-19 18:29:22.866496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.546 [2024-11-19 18:29:22.866544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.546 [2024-11-19 18:29:22.866557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.546 [2024-11-19 18:29:22.866565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.546 [2024-11-19 18:29:22.866571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.546 [2024-11-19 18:29:22.866586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.546 qpair failed and we were unable to recover it.
00:30:21.546 [2024-11-19 18:29:22.876528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.546 [2024-11-19 18:29:22.876576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.546 [2024-11-19 18:29:22.876589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.546 [2024-11-19 18:29:22.876597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.546 [2024-11-19 18:29:22.876603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.546 [2024-11-19 18:29:22.876618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.546 qpair failed and we were unable to recover it.
00:30:21.546 [2024-11-19 18:29:22.886612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.546 [2024-11-19 18:29:22.886665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.546 [2024-11-19 18:29:22.886678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.546 [2024-11-19 18:29:22.886685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.546 [2024-11-19 18:29:22.886692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.546 [2024-11-19 18:29:22.886707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.546 qpair failed and we were unable to recover it.
00:30:21.546 [2024-11-19 18:29:22.896593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.546 [2024-11-19 18:29:22.896639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.546 [2024-11-19 18:29:22.896652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.546 [2024-11-19 18:29:22.896662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.546 [2024-11-19 18:29:22.896669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.546 [2024-11-19 18:29:22.896684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.546 qpair failed and we were unable to recover it.
00:30:21.546 [2024-11-19 18:29:22.906620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.546 [2024-11-19 18:29:22.906670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.546 [2024-11-19 18:29:22.906682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.546 [2024-11-19 18:29:22.906690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.546 [2024-11-19 18:29:22.906696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.546 [2024-11-19 18:29:22.906711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.546 qpair failed and we were unable to recover it.
00:30:21.546 [2024-11-19 18:29:22.916531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.546 [2024-11-19 18:29:22.916580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.546 [2024-11-19 18:29:22.916595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.546 [2024-11-19 18:29:22.916602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.546 [2024-11-19 18:29:22.916609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.546 [2024-11-19 18:29:22.916625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.546 qpair failed and we were unable to recover it.
00:30:21.546 [2024-11-19 18:29:22.926711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.546 [2024-11-19 18:29:22.926761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.546 [2024-11-19 18:29:22.926776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.546 [2024-11-19 18:29:22.926783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.546 [2024-11-19 18:29:22.926790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.546 [2024-11-19 18:29:22.926805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.546 qpair failed and we were unable to recover it.
00:30:21.546 [2024-11-19 18:29:22.936693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.546 [2024-11-19 18:29:22.936738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.546 [2024-11-19 18:29:22.936751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.546 [2024-11-19 18:29:22.936759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.546 [2024-11-19 18:29:22.936765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.546 [2024-11-19 18:29:22.936783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.546 qpair failed and we were unable to recover it.
00:30:21.546 [2024-11-19 18:29:22.946725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.546 [2024-11-19 18:29:22.946787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.546 [2024-11-19 18:29:22.946800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.546 [2024-11-19 18:29:22.946808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.546 [2024-11-19 18:29:22.946814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.546 [2024-11-19 18:29:22.946829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.546 qpair failed and we were unable to recover it.
00:30:21.546 [2024-11-19 18:29:22.956763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.546 [2024-11-19 18:29:22.956857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.546 [2024-11-19 18:29:22.956871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.546 [2024-11-19 18:29:22.956878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.546 [2024-11-19 18:29:22.956885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.546 [2024-11-19 18:29:22.956899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.546 qpair failed and we were unable to recover it.
00:30:21.546 [2024-11-19 18:29:22.966807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.546 [2024-11-19 18:29:22.966852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.546 [2024-11-19 18:29:22.966865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.546 [2024-11-19 18:29:22.966873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.546 [2024-11-19 18:29:22.966879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.546 [2024-11-19 18:29:22.966894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.546 qpair failed and we were unable to recover it.
00:30:21.546 [2024-11-19 18:29:22.976800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.547 [2024-11-19 18:29:22.976847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.547 [2024-11-19 18:29:22.976860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.547 [2024-11-19 18:29:22.976867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.547 [2024-11-19 18:29:22.976874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.547 [2024-11-19 18:29:22.976889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.547 qpair failed and we were unable to recover it.
00:30:21.547 [2024-11-19 18:29:22.986824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.547 [2024-11-19 18:29:22.986880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.547 [2024-11-19 18:29:22.986894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.547 [2024-11-19 18:29:22.986901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.547 [2024-11-19 18:29:22.986908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.547 [2024-11-19 18:29:22.986922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.547 qpair failed and we were unable to recover it.
00:30:21.547 [2024-11-19 18:29:22.996790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.547 [2024-11-19 18:29:22.996847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.547 [2024-11-19 18:29:22.996860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.547 [2024-11-19 18:29:22.996868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.547 [2024-11-19 18:29:22.996874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.547 [2024-11-19 18:29:22.996889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.547 qpair failed and we were unable to recover it.
00:30:21.547 [2024-11-19 18:29:23.006921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:21.547 [2024-11-19 18:29:23.006991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:21.547 [2024-11-19 18:29:23.007004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:21.547 [2024-11-19 18:29:23.007011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:21.547 [2024-11-19 18:29:23.007018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90
00:30:21.547 [2024-11-19 18:29:23.007033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:21.547 qpair failed and we were unable to recover it.
00:30:21.809 [2024-11-19 18:29:23.016915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.810 [2024-11-19 18:29:23.016963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.810 [2024-11-19 18:29:23.016977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.810 [2024-11-19 18:29:23.016984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.810 [2024-11-19 18:29:23.016991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.810 [2024-11-19 18:29:23.017005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.810 qpair failed and we were unable to recover it. 
00:30:21.810 [2024-11-19 18:29:23.026899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.810 [2024-11-19 18:29:23.026947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.810 [2024-11-19 18:29:23.026960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.810 [2024-11-19 18:29:23.026974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.810 [2024-11-19 18:29:23.026981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.810 [2024-11-19 18:29:23.026996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.810 qpair failed and we were unable to recover it. 
00:30:21.810 [2024-11-19 18:29:23.036954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.810 [2024-11-19 18:29:23.037006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.810 [2024-11-19 18:29:23.037020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.810 [2024-11-19 18:29:23.037027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.810 [2024-11-19 18:29:23.037033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.810 [2024-11-19 18:29:23.037048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.810 qpair failed and we were unable to recover it. 
00:30:21.810 [2024-11-19 18:29:23.047031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.810 [2024-11-19 18:29:23.047083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.810 [2024-11-19 18:29:23.047096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.810 [2024-11-19 18:29:23.047103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.810 [2024-11-19 18:29:23.047109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.810 [2024-11-19 18:29:23.047124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.810 qpair failed and we were unable to recover it. 
00:30:21.810 [2024-11-19 18:29:23.057012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.810 [2024-11-19 18:29:23.057071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.810 [2024-11-19 18:29:23.057084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.810 [2024-11-19 18:29:23.057092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.810 [2024-11-19 18:29:23.057098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.810 [2024-11-19 18:29:23.057113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.810 qpair failed and we were unable to recover it. 
00:30:21.810 [2024-11-19 18:29:23.067041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.810 [2024-11-19 18:29:23.067090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.810 [2024-11-19 18:29:23.067103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.810 [2024-11-19 18:29:23.067110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.810 [2024-11-19 18:29:23.067117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.810 [2024-11-19 18:29:23.067135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.810 qpair failed and we were unable to recover it. 
00:30:21.810 [2024-11-19 18:29:23.077033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.810 [2024-11-19 18:29:23.077076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.810 [2024-11-19 18:29:23.077089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.810 [2024-11-19 18:29:23.077097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.810 [2024-11-19 18:29:23.077103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.810 [2024-11-19 18:29:23.077118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.810 qpair failed and we were unable to recover it. 
00:30:21.810 [2024-11-19 18:29:23.087011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.810 [2024-11-19 18:29:23.087070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.810 [2024-11-19 18:29:23.087083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.810 [2024-11-19 18:29:23.087091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.810 [2024-11-19 18:29:23.087098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.810 [2024-11-19 18:29:23.087113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.810 qpair failed and we were unable to recover it. 
00:30:21.810 [2024-11-19 18:29:23.097147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.810 [2024-11-19 18:29:23.097208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.810 [2024-11-19 18:29:23.097222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.810 [2024-11-19 18:29:23.097229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.810 [2024-11-19 18:29:23.097236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.810 [2024-11-19 18:29:23.097251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.810 qpair failed and we were unable to recover it. 
00:30:21.810 [2024-11-19 18:29:23.107145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.810 [2024-11-19 18:29:23.107199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.810 [2024-11-19 18:29:23.107212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.810 [2024-11-19 18:29:23.107220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.810 [2024-11-19 18:29:23.107226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.810 [2024-11-19 18:29:23.107242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.810 qpair failed and we were unable to recover it. 
00:30:21.810 [2024-11-19 18:29:23.117189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.810 [2024-11-19 18:29:23.117261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.810 [2024-11-19 18:29:23.117274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.810 [2024-11-19 18:29:23.117282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.810 [2024-11-19 18:29:23.117288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.810 [2024-11-19 18:29:23.117303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.810 qpair failed and we were unable to recover it. 
00:30:21.810 [2024-11-19 18:29:23.127236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.810 [2024-11-19 18:29:23.127285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.810 [2024-11-19 18:29:23.127298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.810 [2024-11-19 18:29:23.127305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.810 [2024-11-19 18:29:23.127312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.810 [2024-11-19 18:29:23.127327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.810 qpair failed and we were unable to recover it. 
00:30:21.810 [2024-11-19 18:29:23.137219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.810 [2024-11-19 18:29:23.137264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.810 [2024-11-19 18:29:23.137277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.810 [2024-11-19 18:29:23.137285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.810 [2024-11-19 18:29:23.137291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.811 [2024-11-19 18:29:23.137306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.811 qpair failed and we were unable to recover it. 
00:30:21.811 [2024-11-19 18:29:23.147131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.811 [2024-11-19 18:29:23.147177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.811 [2024-11-19 18:29:23.147190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.811 [2024-11-19 18:29:23.147198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.811 [2024-11-19 18:29:23.147204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.811 [2024-11-19 18:29:23.147219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.811 qpair failed and we were unable to recover it. 
00:30:21.811 [2024-11-19 18:29:23.157307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.811 [2024-11-19 18:29:23.157361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.811 [2024-11-19 18:29:23.157377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.811 [2024-11-19 18:29:23.157385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.811 [2024-11-19 18:29:23.157391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.811 [2024-11-19 18:29:23.157406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.811 qpair failed and we were unable to recover it. 
00:30:21.811 [2024-11-19 18:29:23.167352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.811 [2024-11-19 18:29:23.167398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.811 [2024-11-19 18:29:23.167411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.811 [2024-11-19 18:29:23.167418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.811 [2024-11-19 18:29:23.167425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.811 [2024-11-19 18:29:23.167439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.811 qpair failed and we were unable to recover it. 
00:30:21.811 [2024-11-19 18:29:23.177344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.811 [2024-11-19 18:29:23.177391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.811 [2024-11-19 18:29:23.177404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.811 [2024-11-19 18:29:23.177411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.811 [2024-11-19 18:29:23.177418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.811 [2024-11-19 18:29:23.177433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.811 qpair failed and we were unable to recover it. 
00:30:21.811 [2024-11-19 18:29:23.187375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.811 [2024-11-19 18:29:23.187463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.811 [2024-11-19 18:29:23.187476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.811 [2024-11-19 18:29:23.187484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.811 [2024-11-19 18:29:23.187491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.811 [2024-11-19 18:29:23.187506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.811 qpair failed and we were unable to recover it. 
00:30:21.811 [2024-11-19 18:29:23.197412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.811 [2024-11-19 18:29:23.197463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.811 [2024-11-19 18:29:23.197476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.811 [2024-11-19 18:29:23.197484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.811 [2024-11-19 18:29:23.197494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.811 [2024-11-19 18:29:23.197508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.811 qpair failed and we were unable to recover it. 
00:30:21.811 [2024-11-19 18:29:23.207470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.811 [2024-11-19 18:29:23.207521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.811 [2024-11-19 18:29:23.207535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.811 [2024-11-19 18:29:23.207543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.811 [2024-11-19 18:29:23.207549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.811 [2024-11-19 18:29:23.207568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.811 qpair failed and we were unable to recover it. 
00:30:21.811 [2024-11-19 18:29:23.217449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.811 [2024-11-19 18:29:23.217498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.811 [2024-11-19 18:29:23.217513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.811 [2024-11-19 18:29:23.217520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.811 [2024-11-19 18:29:23.217527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.811 [2024-11-19 18:29:23.217542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.811 qpair failed and we were unable to recover it. 
00:30:21.811 [2024-11-19 18:29:23.227479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.811 [2024-11-19 18:29:23.227526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.811 [2024-11-19 18:29:23.227539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.811 [2024-11-19 18:29:23.227546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.811 [2024-11-19 18:29:23.227553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.811 [2024-11-19 18:29:23.227568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.811 qpair failed and we were unable to recover it. 
00:30:21.811 [2024-11-19 18:29:23.237495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.811 [2024-11-19 18:29:23.237568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.811 [2024-11-19 18:29:23.237582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.811 [2024-11-19 18:29:23.237589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.811 [2024-11-19 18:29:23.237596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.811 [2024-11-19 18:29:23.237611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.811 qpair failed and we were unable to recover it. 
00:30:21.811 [2024-11-19 18:29:23.247557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.811 [2024-11-19 18:29:23.247605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.811 [2024-11-19 18:29:23.247618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.811 [2024-11-19 18:29:23.247626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.811 [2024-11-19 18:29:23.247632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.811 [2024-11-19 18:29:23.247647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.811 qpair failed and we were unable to recover it. 
00:30:21.811 [2024-11-19 18:29:23.257571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.811 [2024-11-19 18:29:23.257613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.811 [2024-11-19 18:29:23.257626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.811 [2024-11-19 18:29:23.257634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.811 [2024-11-19 18:29:23.257641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.811 [2024-11-19 18:29:23.257656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.811 qpair failed and we were unable to recover it. 
00:30:21.811 [2024-11-19 18:29:23.267587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.811 [2024-11-19 18:29:23.267636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.811 [2024-11-19 18:29:23.267649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.811 [2024-11-19 18:29:23.267657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.812 [2024-11-19 18:29:23.267664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:21.812 [2024-11-19 18:29:23.267679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.812 qpair failed and we were unable to recover it. 
00:30:22.073 [2024-11-19 18:29:23.277598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.074 [2024-11-19 18:29:23.277646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.074 [2024-11-19 18:29:23.277659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.074 [2024-11-19 18:29:23.277666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.074 [2024-11-19 18:29:23.277673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.074 [2024-11-19 18:29:23.277688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.074 qpair failed and we were unable to recover it. 
00:30:22.074 [2024-11-19 18:29:23.287652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.074 [2024-11-19 18:29:23.287700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.074 [2024-11-19 18:29:23.287715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.074 [2024-11-19 18:29:23.287723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.074 [2024-11-19 18:29:23.287730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.074 [2024-11-19 18:29:23.287745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.074 qpair failed and we were unable to recover it. 
[... the same seven-entry CONNECT failure sequence (Unknown controller ID 0x1; Connect command failed, rc -5, trtype:TCP traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1; sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x7f5758000b90; CQ transport error -6 (No such device or address) on qpair id 4; "qpair failed and we were unable to recover it.") repeats at ~10 ms intervals from 18:29:23.297666 through 18:29:23.628547 ...]
00:30:22.339 [2024-11-19 18:29:23.638586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.339 [2024-11-19 18:29:23.638636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.339 [2024-11-19 18:29:23.638649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.339 [2024-11-19 18:29:23.638656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.339 [2024-11-19 18:29:23.638662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.339 [2024-11-19 18:29:23.638677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.339 qpair failed and we were unable to recover it. 
00:30:22.339 [2024-11-19 18:29:23.648609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.339 [2024-11-19 18:29:23.648654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.339 [2024-11-19 18:29:23.648667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.339 [2024-11-19 18:29:23.648675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.339 [2024-11-19 18:29:23.648681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.339 [2024-11-19 18:29:23.648696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.339 qpair failed and we were unable to recover it. 
00:30:22.339 [2024-11-19 18:29:23.658637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.339 [2024-11-19 18:29:23.658688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.339 [2024-11-19 18:29:23.658701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.339 [2024-11-19 18:29:23.658709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.339 [2024-11-19 18:29:23.658716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.339 [2024-11-19 18:29:23.658730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.339 qpair failed and we were unable to recover it. 
00:30:22.339 [2024-11-19 18:29:23.668659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.339 [2024-11-19 18:29:23.668709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.339 [2024-11-19 18:29:23.668722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.339 [2024-11-19 18:29:23.668730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.339 [2024-11-19 18:29:23.668737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.339 [2024-11-19 18:29:23.668751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.339 qpair failed and we were unable to recover it. 
00:30:22.339 [2024-11-19 18:29:23.678674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.339 [2024-11-19 18:29:23.678721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.339 [2024-11-19 18:29:23.678735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.339 [2024-11-19 18:29:23.678742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.339 [2024-11-19 18:29:23.678749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.339 [2024-11-19 18:29:23.678763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.339 qpair failed and we were unable to recover it. 
00:30:22.339 [2024-11-19 18:29:23.688723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.339 [2024-11-19 18:29:23.688767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.339 [2024-11-19 18:29:23.688780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.339 [2024-11-19 18:29:23.688787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.339 [2024-11-19 18:29:23.688794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.339 [2024-11-19 18:29:23.688809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.339 qpair failed and we were unable to recover it. 
00:30:22.339 [2024-11-19 18:29:23.698637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.339 [2024-11-19 18:29:23.698681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.339 [2024-11-19 18:29:23.698697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.339 [2024-11-19 18:29:23.698704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.339 [2024-11-19 18:29:23.698711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.339 [2024-11-19 18:29:23.698726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.339 qpair failed and we were unable to recover it. 
00:30:22.339 [2024-11-19 18:29:23.708776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.339 [2024-11-19 18:29:23.708822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.339 [2024-11-19 18:29:23.708835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.339 [2024-11-19 18:29:23.708843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.339 [2024-11-19 18:29:23.708849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.339 [2024-11-19 18:29:23.708864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.339 qpair failed and we were unable to recover it. 
00:30:22.339 [2024-11-19 18:29:23.718688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.339 [2024-11-19 18:29:23.718739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.339 [2024-11-19 18:29:23.718756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.339 [2024-11-19 18:29:23.718763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.339 [2024-11-19 18:29:23.718770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.339 [2024-11-19 18:29:23.718784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.339 qpair failed and we were unable to recover it. 
00:30:22.339 [2024-11-19 18:29:23.728760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.339 [2024-11-19 18:29:23.728802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.339 [2024-11-19 18:29:23.728815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.339 [2024-11-19 18:29:23.728822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.339 [2024-11-19 18:29:23.728829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.339 [2024-11-19 18:29:23.728844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.339 qpair failed and we were unable to recover it. 
00:30:22.339 [2024-11-19 18:29:23.738818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.339 [2024-11-19 18:29:23.738859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.339 [2024-11-19 18:29:23.738872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.339 [2024-11-19 18:29:23.738880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.339 [2024-11-19 18:29:23.738886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.339 [2024-11-19 18:29:23.738901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.339 qpair failed and we were unable to recover it. 
00:30:22.339 [2024-11-19 18:29:23.748859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.339 [2024-11-19 18:29:23.748912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.339 [2024-11-19 18:29:23.748936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.339 [2024-11-19 18:29:23.748945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.340 [2024-11-19 18:29:23.748953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.340 [2024-11-19 18:29:23.748973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.340 qpair failed and we were unable to recover it. 
00:30:22.340 [2024-11-19 18:29:23.758920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.340 [2024-11-19 18:29:23.758977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.340 [2024-11-19 18:29:23.759001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.340 [2024-11-19 18:29:23.759010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.340 [2024-11-19 18:29:23.759026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.340 [2024-11-19 18:29:23.759046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.340 qpair failed and we were unable to recover it. 
00:30:22.340 [2024-11-19 18:29:23.768905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.340 [2024-11-19 18:29:23.768950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.340 [2024-11-19 18:29:23.768965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.340 [2024-11-19 18:29:23.768973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.340 [2024-11-19 18:29:23.768979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.340 [2024-11-19 18:29:23.768995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.340 qpair failed and we were unable to recover it. 
00:30:22.340 [2024-11-19 18:29:23.778828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.340 [2024-11-19 18:29:23.778877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.340 [2024-11-19 18:29:23.778891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.340 [2024-11-19 18:29:23.778898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.340 [2024-11-19 18:29:23.778905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.340 [2024-11-19 18:29:23.778920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.340 qpair failed and we were unable to recover it. 
00:30:22.340 [2024-11-19 18:29:23.788993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.340 [2024-11-19 18:29:23.789042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.340 [2024-11-19 18:29:23.789056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.340 [2024-11-19 18:29:23.789063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.340 [2024-11-19 18:29:23.789070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5758000b90 00:30:22.340 [2024-11-19 18:29:23.789084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.340 qpair failed and we were unable to recover it. 00:30:22.340 [2024-11-19 18:29:23.789248] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:30:22.340 A controller has encountered a failure and is being reset. 00:30:22.340 Controller properly reset. 00:30:22.600 Initializing NVMe Controllers 00:30:22.600 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:22.600 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:22.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:22.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:22.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:22.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:22.600 Initialization complete. Launching workers. 
00:30:22.600 Starting thread on core 1 00:30:22.600 Starting thread on core 2 00:30:22.600 Starting thread on core 3 00:30:22.600 Starting thread on core 0 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:22.600 00:30:22.600 real 0m11.487s 00:30:22.600 user 0m21.833s 00:30:22.600 sys 0m3.727s 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:22.600 ************************************ 00:30:22.600 END TEST nvmf_target_disconnect_tc2 00:30:22.600 ************************************ 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:22.600 rmmod nvme_tcp 00:30:22.600 rmmod nvme_fabrics 00:30:22.600 rmmod nvme_keyring 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2178501 ']' 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2178501 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2178501 ']' 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2178501 00:30:22.600 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:30:22.601 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:22.601 18:29:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2178501 00:30:22.601 18:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:30:22.601 18:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:30:22.601 18:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2178501' 00:30:22.601 killing process with pid 2178501 00:30:22.601 18:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2178501 00:30:22.601 18:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2178501 00:30:22.861 18:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:22.861 18:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:22.861 18:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:22.861 18:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:22.861 18:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:22.861 18:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:22.861 18:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:22.861 18:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:22.861 18:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:22.861 18:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.861 18:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:22.861 18:29:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.774 18:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:24.774 00:30:24.774 real 0m21.903s 00:30:24.774 user 0m50.083s 00:30:24.774 sys 0m9.820s 00:30:25.035 18:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:25.035 18:29:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:25.035 ************************************ 00:30:25.035 END TEST nvmf_target_disconnect 00:30:25.035 ************************************ 00:30:25.035 18:29:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:25.035 00:30:25.035 real 6m31.706s 00:30:25.035 user 11m21.143s 00:30:25.035 sys 2m15.188s 00:30:25.035 18:29:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:25.035 18:29:26 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.035 ************************************ 00:30:25.035 END TEST nvmf_host 00:30:25.035 ************************************ 00:30:25.035 18:29:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:25.035 18:29:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:25.035 18:29:26 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:25.035 18:29:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:25.035 18:29:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:25.035 18:29:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:25.035 ************************************ 00:30:25.035 START TEST nvmf_target_core_interrupt_mode 00:30:25.035 ************************************ 00:30:25.035 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:25.035 * Looking for test storage... 
00:30:25.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:25.035 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:25.035 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:30:25.035 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:25.297 18:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:25.297 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:25.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.298 --rc 
genhtml_branch_coverage=1 00:30:25.298 --rc genhtml_function_coverage=1 00:30:25.298 --rc genhtml_legend=1 00:30:25.298 --rc geninfo_all_blocks=1 00:30:25.298 --rc geninfo_unexecuted_blocks=1 00:30:25.298 00:30:25.298 ' 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:25.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.298 --rc genhtml_branch_coverage=1 00:30:25.298 --rc genhtml_function_coverage=1 00:30:25.298 --rc genhtml_legend=1 00:30:25.298 --rc geninfo_all_blocks=1 00:30:25.298 --rc geninfo_unexecuted_blocks=1 00:30:25.298 00:30:25.298 ' 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:25.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.298 --rc genhtml_branch_coverage=1 00:30:25.298 --rc genhtml_function_coverage=1 00:30:25.298 --rc genhtml_legend=1 00:30:25.298 --rc geninfo_all_blocks=1 00:30:25.298 --rc geninfo_unexecuted_blocks=1 00:30:25.298 00:30:25.298 ' 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:25.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.298 --rc genhtml_branch_coverage=1 00:30:25.298 --rc genhtml_function_coverage=1 00:30:25.298 --rc genhtml_legend=1 00:30:25.298 --rc geninfo_all_blocks=1 00:30:25.298 --rc geninfo_unexecuted_blocks=1 00:30:25.298 00:30:25.298 ' 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:25.298 
18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.298 18:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:25.298 
18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:25.298 ************************************ 00:30:25.298 START TEST nvmf_abort 00:30:25.298 ************************************ 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:25.298 * Looking for test storage... 
00:30:25.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:25.298 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:25.561 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:25.562 18:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:25.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.562 --rc genhtml_branch_coverage=1 00:30:25.562 --rc genhtml_function_coverage=1 00:30:25.562 --rc genhtml_legend=1 00:30:25.562 --rc geninfo_all_blocks=1 00:30:25.562 --rc geninfo_unexecuted_blocks=1 00:30:25.562 00:30:25.562 ' 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:25.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.562 --rc genhtml_branch_coverage=1 00:30:25.562 --rc genhtml_function_coverage=1 00:30:25.562 --rc genhtml_legend=1 00:30:25.562 --rc geninfo_all_blocks=1 00:30:25.562 --rc geninfo_unexecuted_blocks=1 00:30:25.562 00:30:25.562 ' 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:25.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.562 --rc genhtml_branch_coverage=1 00:30:25.562 --rc genhtml_function_coverage=1 00:30:25.562 --rc genhtml_legend=1 00:30:25.562 --rc geninfo_all_blocks=1 00:30:25.562 --rc geninfo_unexecuted_blocks=1 00:30:25.562 00:30:25.562 ' 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:25.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.562 --rc genhtml_branch_coverage=1 00:30:25.562 --rc genhtml_function_coverage=1 00:30:25.562 --rc genhtml_legend=1 00:30:25.562 --rc geninfo_all_blocks=1 00:30:25.562 --rc geninfo_unexecuted_blocks=1 00:30:25.562 00:30:25.562 ' 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.562 18:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:25.562 18:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:25.562 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:33.704 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:30:33.704 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:33.704 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:33.704 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:33.704 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:33.704 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:33.704 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:33.704 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:33.704 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:33.704 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:33.704 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:33.704 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:33.704 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:33.704 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:33.704 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:33.705 18:29:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:33.705 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:33.705 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:33.705 
18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:33.705 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:33.705 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:33.705 18:29:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:33.705 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:33.705 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:33.705 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:33.705 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:33.705 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:30:33.705 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:33.705 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:33.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:33.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:30:33.706 00:30:33.706 --- 10.0.0.2 ping statistics --- 00:30:33.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.706 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:33.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:33.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:30:33.706 00:30:33.706 --- 10.0.0.1 ping statistics --- 00:30:33.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.706 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2184186 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2184186 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2184186 ']' 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:33.706 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:33.706 [2024-11-19 18:29:34.255176] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:33.706 [2024-11-19 18:29:34.256133] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:30:33.706 [2024-11-19 18:29:34.256175] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:33.706 [2024-11-19 18:29:34.348412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:33.706 [2024-11-19 18:29:34.384708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:33.706 [2024-11-19 18:29:34.384739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:33.706 [2024-11-19 18:29:34.384747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:33.706 [2024-11-19 18:29:34.384754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:33.706 [2024-11-19 18:29:34.384759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:33.706 [2024-11-19 18:29:34.386189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:33.706 [2024-11-19 18:29:34.386344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.706 [2024-11-19 18:29:34.386345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:33.706 [2024-11-19 18:29:34.440969] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:33.706 [2024-11-19 18:29:34.442032] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:33.706 [2024-11-19 18:29:34.442799] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:33.706 [2024-11-19 18:29:34.442908] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:33.706 [2024-11-19 18:29:35.087099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:30:33.706 Malloc0 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:33.706 Delay0 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.706 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:33.967 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.967 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:30:33.967 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.967 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:33.967 [2024-11-19 18:29:35.187030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:33.967 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.967 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:33.967 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.967 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:33.967 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.967 18:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:33.967 [2024-11-19 18:29:35.324931] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:36.511 Initializing NVMe Controllers 00:30:36.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:36.511 controller IO queue size 128 less than required 00:30:36.511 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:36.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:36.511 Initialization complete. Launching workers. 
00:30:36.511 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28583 00:30:36.511 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28640, failed to submit 66 00:30:36.511 success 28583, unsuccessful 57, failed 0 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:36.511 rmmod nvme_tcp 00:30:36.511 rmmod nvme_fabrics 00:30:36.511 rmmod nvme_keyring 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:36.511 18:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2184186 ']' 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2184186 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2184186 ']' 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2184186 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2184186 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:36.511 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2184186' 00:30:36.511 killing process with pid 2184186 00:30:36.512 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2184186 00:30:36.512 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2184186 00:30:36.512 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:36.512 18:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:36.512 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:36.512 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:36.512 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:36.512 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:36.512 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:36.512 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:36.512 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:36.512 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.512 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.512 18:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.425 18:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:38.425 00:30:38.425 real 0m13.137s 00:30:38.425 user 0m10.741s 00:30:38.425 sys 0m6.766s 00:30:38.425 18:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.425 18:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:38.425 ************************************ 00:30:38.425 END TEST nvmf_abort 00:30:38.425 ************************************ 00:30:38.425 18:29:39 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:38.425 18:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:38.425 18:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.425 18:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:38.425 ************************************ 00:30:38.425 START TEST nvmf_ns_hotplug_stress 00:30:38.425 ************************************ 00:30:38.425 18:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:38.686 * Looking for test storage... 
00:30:38.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.686 18:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:38.686 18:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:30:38.686 18:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:38.686 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:38.686 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.686 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.686 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.687 18:29:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.687 18:29:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:38.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.687 --rc genhtml_branch_coverage=1 00:30:38.687 --rc genhtml_function_coverage=1 00:30:38.687 --rc genhtml_legend=1 00:30:38.687 --rc geninfo_all_blocks=1 00:30:38.687 --rc geninfo_unexecuted_blocks=1 00:30:38.687 00:30:38.687 ' 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:38.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.687 --rc genhtml_branch_coverage=1 00:30:38.687 --rc genhtml_function_coverage=1 00:30:38.687 --rc genhtml_legend=1 00:30:38.687 --rc geninfo_all_blocks=1 00:30:38.687 --rc geninfo_unexecuted_blocks=1 00:30:38.687 00:30:38.687 ' 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:38.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.687 --rc genhtml_branch_coverage=1 00:30:38.687 --rc genhtml_function_coverage=1 00:30:38.687 --rc genhtml_legend=1 00:30:38.687 --rc geninfo_all_blocks=1 00:30:38.687 --rc geninfo_unexecuted_blocks=1 00:30:38.687 00:30:38.687 ' 00:30:38.687 18:29:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:38.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.687 --rc genhtml_branch_coverage=1 00:30:38.687 --rc genhtml_function_coverage=1 00:30:38.687 --rc genhtml_legend=1 00:30:38.687 --rc geninfo_all_blocks=1 00:30:38.687 --rc geninfo_unexecuted_blocks=1 00:30:38.687 00:30:38.687 ' 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.687 18:29:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.687 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.688 
18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:38.688 18:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:46.831 
18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:46.831 18:29:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:46.831 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:46.831 18:29:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:46.831 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.831 
18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:46.831 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:46.831 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:46.831 
18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:46.831 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:46.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:46.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:30:46.832 00:30:46.832 --- 10.0.0.2 ping statistics --- 00:30:46.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.832 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:46.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:46.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:30:46.832 00:30:46.832 --- 10.0.0.1 ping statistics --- 00:30:46.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.832 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:46.832 18:29:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2188881 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2188881 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2188881 ']' 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:46.832 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:46.832 [2024-11-19 18:29:47.471108] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:46.832 [2024-11-19 18:29:47.472068] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:30:46.832 [2024-11-19 18:29:47.472104] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:46.832 [2024-11-19 18:29:47.565877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:46.832 [2024-11-19 18:29:47.601777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:46.832 [2024-11-19 18:29:47.601808] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:46.832 [2024-11-19 18:29:47.601816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:46.832 [2024-11-19 18:29:47.601823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:46.832 [2024-11-19 18:29:47.601829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:46.832 [2024-11-19 18:29:47.603218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:46.832 [2024-11-19 18:29:47.603369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.832 [2024-11-19 18:29:47.603370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:46.832 [2024-11-19 18:29:47.658247] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:46.832 [2024-11-19 18:29:47.659169] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:46.832 [2024-11-19 18:29:47.659713] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:46.832 [2024-11-19 18:29:47.659860] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:46.832 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:46.832 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:30:46.832 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:46.832 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:46.832 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:47.094 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:47.094 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:30:47.094 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:47.094 [2024-11-19 18:29:48.456119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:47.094 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:47.355 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:47.355 [2024-11-19 18:29:48.812951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:47.617 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:47.617 18:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:47.878 Malloc0 00:30:47.878 18:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:48.138 Delay0 00:30:48.139 18:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.139 18:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:48.399 NULL1 00:30:48.399 18:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:30:48.659 18:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2189333 00:30:48.659 18:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:48.659 18:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:48.659 18:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.920 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.920 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:48.920 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:49.182 true 00:30:49.182 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:49.182 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.443 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.443 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:49.443 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:49.705 true 00:30:49.705 18:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:49.705 18:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.973 18:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.286 18:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:50.286 18:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:50.286 true 00:30:50.286 18:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:50.286 18:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.619 18:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.910 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:50.910 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:50.910 true 00:30:50.910 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:50.910 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.171 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.432 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:51.432 18:29:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:51.432 true 00:30:51.432 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:51.432 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.692 18:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.953 18:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:51.953 18:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:51.953 true 00:30:51.953 18:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:51.953 18:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.214 18:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.476 18:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:30:52.476 18:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:52.476 true 00:30:52.737 18:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:52.737 18:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.737 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:52.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:53.259 true 00:30:53.259 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:53.259 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.259 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.520 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:30:53.520 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:53.781 true 00:30:53.781 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:53.781 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.782 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.041 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:54.041 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:54.302 true 00:30:54.302 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:54.302 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.563 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.563 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:54.563 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:54.824 true 00:30:54.824 18:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:54.824 18:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.085 18:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.085 18:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:55.085 18:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:55.345 true 00:30:55.345 18:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:55.345 18:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.605 18:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.605 18:29:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:55.605 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:55.865 true 00:30:55.865 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:55.865 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.126 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.491 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:56.491 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:56.491 true 00:30:56.491 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:56.491 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.752 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:30:56.752 18:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:56.752 18:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:57.013 true 00:30:57.013 18:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:57.013 18:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.274 18:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.274 18:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:57.274 18:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:57.535 true 00:30:57.535 18:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:57.535 18:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.796 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.057 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:58.057 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:58.057 true 00:30:58.057 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:58.057 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.317 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.579 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:58.579 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:58.579 true 00:30:58.579 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:58.579 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.840 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.101 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:59.101 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:59.101 true 00:30:59.363 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:59.363 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.363 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.625 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:59.625 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:59.886 true 00:30:59.886 18:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:30:59.886 18:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.886 18:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.147 18:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:00.147 18:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:00.408 true 00:31:00.408 18:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:00.408 18:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.408 18:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.669 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:00.669 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:00.930 true 00:31:00.930 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:00.930 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.191 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.191 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:01.191 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:01.452 true 00:31:01.452 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:01.452 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.714 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.714 18:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:01.714 18:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:01.974 true 00:31:01.974 18:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:01.974 18:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.235 18:30:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.495 18:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:02.495 18:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:02.495 true 00:31:02.495 18:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:02.495 18:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.756 18:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.018 18:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:03.018 18:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:03.018 true 00:31:03.018 18:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:03.018 18:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:31:03.279 18:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.540 18:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:03.540 18:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:03.540 true 00:31:03.801 18:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:03.801 18:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.801 18:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.062 18:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:04.062 18:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:04.323 true 00:31:04.323 18:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:04.323 18:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:31:04.323 18:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.584 18:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:04.584 18:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:31:04.844 true 00:31:04.844 18:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:04.844 18:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.105 18:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.105 18:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:31:05.105 18:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:31:05.366 true 00:31:05.366 18:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:05.366 18:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.628 18:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.628 18:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:31:05.628 18:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:31:05.888 true 00:31:05.888 18:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:05.888 18:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.149 18:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.149 18:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:31:06.149 18:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:31:06.410 true 00:31:06.410 18:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:06.410 18:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.670 18:30:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.670 18:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:31:06.670 18:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:31:06.930 true 00:31:06.930 18:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:06.930 18:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.190 18:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.451 18:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:31:07.451 18:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:31:07.451 true 00:31:07.451 18:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:07.451 18:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.711 18:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.972 18:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:31:07.972 18:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:31:07.972 true 00:31:07.972 18:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:07.972 18:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.232 18:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.493 18:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:31:08.493 18:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:31:08.493 true 00:31:08.493 18:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:08.493 18:30:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.754 18:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.013 18:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:31:09.013 18:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:31:09.013 true 00:31:09.013 18:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:09.013 18:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.271 18:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.530 18:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:31:09.530 18:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:31:09.530 true 00:31:09.788 18:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 
00:31:09.788 18:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.788 18:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.046 18:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:31:10.046 18:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:31:10.304 true 00:31:10.304 18:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:10.304 18:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.304 18:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.563 18:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:31:10.563 18:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:31:10.823 true 00:31:10.823 18:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 2189333 00:31:10.823 18:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.083 18:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:11.083 18:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:31:11.083 18:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:31:11.343 true 00:31:11.343 18:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:11.343 18:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.602 18:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:11.602 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:31:11.602 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:31:11.861 true 00:31:11.861 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:11.861 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.122 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.122 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:31:12.381 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:31:12.381 true 00:31:12.381 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:12.381 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.642 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.902 18:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:31:12.902 18:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:31:12.902 true 00:31:12.902 18:30:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:12.902 18:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.163 18:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:13.423 18:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:31:13.423 18:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:31:13.423 true 00:31:13.684 18:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:13.684 18:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.684 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:13.943 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:31:13.943 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:31:14.201 true 
00:31:14.201 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:14.201 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:14.201 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:14.461 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:31:14.461 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:31:14.721 true 00:31:14.721 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:14.721 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:14.980 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:14.980 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:31:14.980 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 
00:31:15.239 true 00:31:15.239 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:15.239 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.499 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.499 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:31:15.499 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:31:15.757 true 00:31:15.757 18:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:15.757 18:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.017 18:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.277 18:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:31:16.277 18:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1050 00:31:16.277 true 00:31:16.277 18:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:16.277 18:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.536 18:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.795 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:31:16.795 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:31:16.795 true 00:31:16.795 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:16.795 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.055 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.315 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:31:17.315 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:31:17.315 true 00:31:17.574 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:17.574 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.575 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.834 18:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:31:17.834 18:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:31:17.834 true 00:31:18.092 18:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333 00:31:18.092 18:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.092 18:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.352 18:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:31:18.352 18:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:31:18.612 true
00:31:18.612 18:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333
00:31:18.612 18:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:18.612 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:18.873 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:31:18.873 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:31:18.873 Initializing NVMe Controllers
00:31:18.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:18.873 Controller IO queue size 128, less than required.
00:31:18.873 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:18.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:18.873 Initialization complete. Launching workers.
00:31:18.873 ========================================================
00:31:18.873 Latency(us)
00:31:18.873 Device Information : IOPS MiB/s Average min max
00:31:18.873 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30383.15 14.84 4212.96 1092.77 11421.33
00:31:18.873 ========================================================
00:31:18.873 Total : 30383.15 14.84 4212.96 1092.77 11421.33
00:31:18.873
00:31:19.132 true
00:31:19.132 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2189333
00:31:19.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2189333) - No such process
00:31:19.132 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2189333
00:31:19.133 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:19.393 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:31:19.393 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:31:19.393 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:31:19.393 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:31:19.393 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:19.393 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
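The interleaved events above come from the script's main stress loop (the `ns_hotplug_stress.sh@44`–`@50` trace markers): while the background I/O generator process is alive, namespace 1 is hot-removed, re-added, and the backing null bdev is grown by one unit per iteration. A minimal, hypothetical reconstruction of that loop — the `rpc` function is a stand-in stub for `scripts/rpc.py`, and `perf_pid` and the fixed three iterations are placeholders so the sketch is runnable:

```shell
# Hypothetical stand-in for scripts/rpc.py so this sketch runs anywhere.
rpc() { echo "rpc $*" >/dev/null; }

perf_pid=$$        # placeholder: the real script tracks the I/O generator PID
null_size=1024

# Mirror of the @44-@50 loop seen in the trace: hot-remove and re-add
# namespace 1, then grow the null bdev, until the perf process exits.
for _ in 1 2 3; do                      # real script: loops while kill -0 succeeds
    kill -0 "$perf_pid" || break        # @44: is the I/O generator still alive?
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46
    null_size=$((null_size + 1))                                # @49
    rpc bdev_null_resize NULL1 "$null_size"                     # @50
done
echo "$null_size"
```

Once `kill -0` fails ("No such process" above), the loop ends and the script `wait`s on the finished perf process before tearing the namespaces down.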
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:31:19.654 null0 00:31:19.654 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:19.654 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:19.654 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:31:19.914 null1 00:31:19.914 18:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:19.914 18:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:19.914 18:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:19.914 null2 00:31:19.914 18:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:19.914 18:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:19.914 18:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:20.175 null3 00:31:20.175 18:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:20.175 18:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:31:20.175 18:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:20.435 null4 00:31:20.435 18:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:20.435 18:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:20.435 18:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:20.435 null5 00:31:20.435 18:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:20.435 18:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:20.435 18:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:20.696 null6 00:31:20.696 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:20.696 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:20.696 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:20.959 null7 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:20.959 18:30:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:20.959 18:30:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2196258 2196260 2196263 2196266 2196270 2196272 2196274 2196276 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:20.959 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:20.960 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:20.960 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:20.960 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:21.220 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:21.220 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:21.220 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.221 18:30:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:21.221 18:30:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.221 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:21.481 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.481 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:21.481 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:21.481 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:21.481 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:21.481 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:21.481 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:21.481 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:21.742 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.742 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.742 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:21.742 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.742 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.742 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.742 18:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:21.742 18:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:21.742 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:22.003 18:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.003 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:22.265 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:22.265 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.265 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:22.265 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:22.265 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:22.265 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:22.265 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:22.265 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:22.265 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.265 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.265 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:22.265 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.265 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.265 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.529 18:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.529 18:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:22.529 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:22.790 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:22.790 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 
null5 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:22.791 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.053 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:23.313 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.313 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.313 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:23.313 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:23.313 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:23.313 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:23.314 18:30:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:23.314 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:23.314 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.314 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:23.314 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:23.314 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.314 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.314 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.574 18:30:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:23.574 18:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:23.574 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:23.574 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.574 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 
null1 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:23.835 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.097 18:30:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:24.097 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.358 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.358 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:24.358 18:30:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:24.358 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.358 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.358 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:24.358 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.358 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.358 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:24.358 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:24.358 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:24.358 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:31:24.358 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.358 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.358 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:24.359 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:24.359 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:24.359 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:24.359 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:24.619 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.619 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.619 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.619 18:30:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.619 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.619 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.619 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.619 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.619 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:24.619 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.619 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.619 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.619 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.620 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.620 18:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
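[editor's note] The xtrace output above is the hotplug loop from `target/ns_hotplug_stress.sh` (lines @16–@18 in the trace): ten iterations that attach namespaces 1–8 (backed by bdevs `null0`–`null7`) to `nqn.2016-06.io.spdk:cnode1` via `nvmf_subsystem_add_ns` and detach them via `nvmf_subsystem_remove_ns`. A minimal standalone sketch of that pattern is below; the `RPC` stub is an assumption (a real run would invoke `spdk/scripts/rpc.py` against a live target), and the random namespace choice only approximates the racing workers seen in the log.

```shell
#!/usr/bin/env bash
# Hedged sketch of the add/remove hotplug loop visible in the trace above.
# RPC is stubbed with echo so the sketch runs without an SPDK target;
# point it at spdk/scripts/rpc.py to drive a real nvmf target instead.
RPC="echo rpc.py"                      # assumption: stand-in for scripts/rpc.py
NQN="nqn.2016-06.io.spdk:cnode1"       # subsystem NQN from the log

i=0
while (( i < 10 )); do                 # matches the "(( i < 10 ))" guard (@16)
    nsid=$(( (RANDOM % 8) + 1 ))       # namespace IDs 1..8, bdevs null0..null7
    $RPC nvmf_subsystem_add_ns -n "$nsid" "$NQN" "null$(( nsid - 1 ))"   # @17
    $RPC nvmf_subsystem_remove_ns "$NQN" "$nsid"                          # @18
    (( ++i ))                          # matches the "(( ++i ))" step (@16)
done
echo "iterations=$i"
```

In the real test several such loops run concurrently, which is why the add/remove lines for different namespace IDs interleave in the log.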
00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:24.881 rmmod nvme_tcp 00:31:24.881 rmmod nvme_fabrics 00:31:24.881 rmmod nvme_keyring 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2188881 ']' 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2188881 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2188881 ']' 00:31:24.881 18:30:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2188881 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2188881 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2188881' 00:31:24.881 killing process with pid 2188881 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2188881 00:31:24.881 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2188881 00:31:25.142 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:25.142 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:25.142 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:25.142 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:25.142 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:25.142 18:30:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:25.142 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:25.142 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:25.142 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:25.142 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.142 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.142 18:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.058 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:27.058 00:31:27.058 real 0m48.581s 00:31:27.058 user 3m1.960s 00:31:27.058 sys 0m22.614s 00:31:27.058 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:27.058 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:27.058 ************************************ 00:31:27.058 END TEST nvmf_ns_hotplug_stress 00:31:27.058 ************************************ 00:31:27.058 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:27.058 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 
1 ']' 00:31:27.058 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:27.058 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:27.319 ************************************ 00:31:27.320 START TEST nvmf_delete_subsystem 00:31:27.320 ************************************ 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:27.320 * Looking for test storage... 00:31:27.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:27.320 18:30:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:27.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.320 --rc genhtml_branch_coverage=1 00:31:27.320 --rc genhtml_function_coverage=1 00:31:27.320 --rc genhtml_legend=1 00:31:27.320 --rc geninfo_all_blocks=1 00:31:27.320 --rc geninfo_unexecuted_blocks=1 00:31:27.320 00:31:27.320 ' 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:27.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.320 --rc genhtml_branch_coverage=1 00:31:27.320 --rc genhtml_function_coverage=1 00:31:27.320 --rc genhtml_legend=1 00:31:27.320 --rc geninfo_all_blocks=1 00:31:27.320 --rc geninfo_unexecuted_blocks=1 00:31:27.320 00:31:27.320 ' 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:27.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.320 --rc genhtml_branch_coverage=1 00:31:27.320 --rc genhtml_function_coverage=1 00:31:27.320 --rc genhtml_legend=1 00:31:27.320 --rc geninfo_all_blocks=1 00:31:27.320 --rc geninfo_unexecuted_blocks=1 00:31:27.320 00:31:27.320 ' 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:27.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.320 --rc genhtml_branch_coverage=1 00:31:27.320 --rc genhtml_function_coverage=1 00:31:27.320 --rc genhtml_legend=1 00:31:27.320 --rc geninfo_all_blocks=1 00:31:27.320 --rc geninfo_unexecuted_blocks=1 00:31:27.320 00:31:27.320 ' 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.320 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:27.321 18:30:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.321 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.582 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:27.582 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:27.582 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:27.582 18:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:35.724 18:30:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:35.724 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:35.725 18:30:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:35.725 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:35.725 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.725 18:30:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:35.725 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:35.725 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:35.725 18:30:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:35.725 18:30:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:35.725 18:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:35.725 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:35.725 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:35.725 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:35.725 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:35.725 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:35.725 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:35.725 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:35.725 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:35.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:35.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:31:35.725 00:31:35.725 --- 10.0.0.2 ping statistics --- 00:31:35.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.725 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:31:35.725 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:35.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:35.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:31:35.725 00:31:35.725 --- 10.0.0.1 ping statistics --- 00:31:35.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.725 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:31:35.725 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:35.725 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:35.725 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:35.725 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.725 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:35.725 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:35.725 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.725 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:35.726 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:35.726 
18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:35.726 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:35.726 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:35.726 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:35.726 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2201301 00:31:35.726 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2201301 00:31:35.726 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:35.726 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2201301 ']' 00:31:35.726 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.726 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:35.726 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:35.726 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:35.726 18:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:35.726 [2024-11-19 18:30:36.372900] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:35.726 [2024-11-19 18:30:36.374027] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:31:35.726 [2024-11-19 18:30:36.374078] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.726 [2024-11-19 18:30:36.473834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:35.726 [2024-11-19 18:30:36.526412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.726 [2024-11-19 18:30:36.526465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.726 [2024-11-19 18:30:36.526473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.726 [2024-11-19 18:30:36.526481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.726 [2024-11-19 18:30:36.526487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:35.726 [2024-11-19 18:30:36.528076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.726 [2024-11-19 18:30:36.528082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.726 [2024-11-19 18:30:36.606367] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:31:35.726 [2024-11-19 18:30:36.607151] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:31:35.726 [2024-11-19 18:30:36.607401] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:35.987 [2024-11-19 18:30:37.233178] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:35.987 [2024-11-19 18:30:37.265605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:35.987 NULL1
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:35.987 Delay0
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2201512
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:31:35.987 18:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
[2024-11-19 18:30:37.388886] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
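Stripped of the xtrace noise, the target configuration recorded above is a short RPC sequence: create the TCP transport, create subsystem cnode1, add a 10.0.0.2:4420 listener, then back the subsystem with a null bdev wrapped in a delay bdev whose large per-I/O delays keep requests in flight long enough for the later nvmf_delete_subsystem to catch them. The sketch below replays those calls in log order; the rpc() helper is mine and only echoes rather than invoking SPDK's actual scripts/rpc.py, so nothing here needs a running target:

```shell
#!/usr/bin/env bash
# Echo-only replay of the RPC sequence recorded in the log.
# rpc() is a stand-in for SPDK's scripts/rpc.py; it just prints the call.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512               # sizes copied from the log
rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # large per-I/O delays
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

spdk_nvme_perf is then pointed at the listener (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420') with queue depth 128, so a full queue of slow I/O is pending when the subsystem is deleted, which is what produces the aborted-I/O storm later in the log.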
00:31:37.900 18:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:37.900 18:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.900 18:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 starting I/O failed: -6 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 starting I/O failed: -6 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 starting I/O failed: -6 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 starting I/O failed: -6 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 starting I/O failed: -6 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 starting I/O failed: -6 00:31:38.161 Read completed with error (sct=0, 
sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 starting I/O failed: -6 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 starting I/O failed: -6 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 starting I/O failed: -6 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 starting I/O failed: -6 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 [2024-11-19 18:30:39.429570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb562c0 is same with the state(6) to be set 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read 
completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, 
sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 starting I/O failed: -6 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.161 starting I/O failed: -6 00:31:38.161 Write completed with error (sct=0, sc=8) 00:31:38.161 Read completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 starting I/O failed: -6 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 starting I/O failed: -6 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 starting I/O failed: -6 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 starting I/O failed: -6 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 starting I/O failed: -6 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 starting I/O failed: -6 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error 
(sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 starting I/O failed: -6 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 starting I/O failed: -6 00:31:38.162 [2024-11-19 18:30:39.434045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4f2400d020 is same with the state(6) to be set 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed 
with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Write completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:38.162 Read completed with error (sct=0, sc=8) 00:31:39.105 [2024-11-19 18:30:40.405960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb579a0 is same with the state(6) to be set 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error 
(sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 [2024-11-19 18:30:40.433063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb564a0 is same with the state(6) to be set 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 [2024-11-19 18:30:40.433500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb56860 is same with the state(6) to be set 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, 
sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 [2024-11-19 18:30:40.434774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4f2400d350 is same with the state(6) to be set 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Write completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 Read completed with error (sct=0, sc=8) 00:31:39.105 [2024-11-19 18:30:40.435192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4f24000c40 is same with the state(6) to be set 00:31:39.105 Initializing NVMe Controllers 00:31:39.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:39.105 Controller IO queue size 128, less than required. 
00:31:39.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:39.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:39.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:39.105 Initialization complete. Launching workers.
00:31:39.105 ========================================================
00:31:39.105 Latency(us)
00:31:39.106 Device Information : IOPS MiB/s Average min max
00:31:39.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 161.72 0.08 911623.71 371.10 1007217.46
00:31:39.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 151.27 0.07 962562.38 337.91 2001192.03
00:31:39.106 ========================================================
00:31:39.106 Total : 312.99 0.15 936242.72 337.91 2001192.03
00:31:39.106
00:31:39.106 [2024-11-19 18:30:40.435666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb579a0 (9): Bad file descriptor
00:31:39.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:39.106 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:39.106 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:31:39.106 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2201512
00:31:39.106 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # 
kill -0 2201512 00:31:39.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2201512) - No such process 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2201512 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2201512 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2201512 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.678 [2024-11-19 18:30:40.969497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2202185 00:31:39.678 18:30:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2202185 00:31:39.678 18:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:39.678 [2024-11-19 18:30:41.068573] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:40.250 18:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:40.250 18:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2202185 00:31:40.250 18:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:40.822 18:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:40.822 18:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2202185 00:31:40.822 18:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:41.083 18:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:41.083 18:30:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2202185 00:31:41.083 18:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:41.654 18:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:41.654 18:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2202185 00:31:41.654 18:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:42.225 18:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:42.225 18:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2202185 00:31:42.225 18:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:42.797 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:42.797 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2202185 00:31:42.797 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:42.797 Initializing NVMe Controllers 00:31:42.797 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:42.797 Controller IO queue size 128, less than required. 00:31:42.797 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:42.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:42.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:42.798 Initialization complete. Launching workers. 00:31:42.798 ======================================================== 00:31:42.798 Latency(us) 00:31:42.798 Device Information : IOPS MiB/s Average min max 00:31:42.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002685.97 1000225.73 1006997.24 00:31:42.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004421.92 1000429.75 1010903.65 00:31:42.798 ======================================================== 00:31:42.798 Total : 256.00 0.12 1003553.95 1000225.73 1010903.65 00:31:42.798 00:31:43.059 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:43.059 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2202185 00:31:43.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2202185) - No such process 00:31:43.059 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2202185 00:31:43.059 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:43.059 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:31:43.059 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:43.059 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:43.320 rmmod nvme_tcp 00:31:43.320 rmmod nvme_fabrics 00:31:43.320 rmmod nvme_keyring 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2201301 ']' 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2201301 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2201301 ']' 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2201301 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2201301 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2201301' 00:31:43.320 killing process with pid 2201301 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2201301 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2201301 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:43.320 18:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.867 18:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:45.867 00:31:45.867 real 0m18.300s 00:31:45.867 user 0m26.507s 00:31:45.867 sys 0m7.265s 00:31:45.867 18:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:45.867 18:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:45.867 ************************************ 00:31:45.867 END TEST nvmf_delete_subsystem 00:31:45.867 ************************************ 00:31:45.867 18:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:45.867 18:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:45.867 18:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:45.867 18:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:45.867 ************************************ 00:31:45.867 START TEST nvmf_host_management 00:31:45.867 ************************************ 00:31:45.867 18:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:45.867 * Looking for test storage... 
00:31:45.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:45.867 18:30:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:45.867 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:45.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.868 --rc genhtml_branch_coverage=1 00:31:45.868 --rc genhtml_function_coverage=1 00:31:45.868 --rc genhtml_legend=1 00:31:45.868 --rc geninfo_all_blocks=1 00:31:45.868 --rc geninfo_unexecuted_blocks=1 00:31:45.868 00:31:45.868 ' 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:45.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.868 --rc genhtml_branch_coverage=1 00:31:45.868 --rc genhtml_function_coverage=1 00:31:45.868 --rc genhtml_legend=1 00:31:45.868 --rc geninfo_all_blocks=1 00:31:45.868 --rc geninfo_unexecuted_blocks=1 00:31:45.868 00:31:45.868 ' 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:45.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.868 --rc genhtml_branch_coverage=1 00:31:45.868 --rc genhtml_function_coverage=1 00:31:45.868 --rc genhtml_legend=1 00:31:45.868 --rc geninfo_all_blocks=1 00:31:45.868 --rc geninfo_unexecuted_blocks=1 00:31:45.868 00:31:45.868 ' 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:45.868 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.868 --rc genhtml_branch_coverage=1 00:31:45.868 --rc genhtml_function_coverage=1 00:31:45.868 --rc genhtml_legend=1 00:31:45.868 --rc geninfo_all_blocks=1 00:31:45.868 --rc geninfo_unexecuted_blocks=1 00:31:45.868 00:31:45.868 ' 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.868 18:30:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.868 
18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.868 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:45.869 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:54.023 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:54.023 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:54.023 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:54.023 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:54.023 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:54.023 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:54.023 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:54.023 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:54.023 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:54.023 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:54.023 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:54.024 
18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:54.024 18:30:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:54.024 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.024 18:30:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:54.024 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.024 18:30:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:54.024 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:54.024 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:54.024 18:30:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:54.024 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:54.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:54.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:31:54.025 00:31:54.025 --- 10.0.0.2 ping statistics --- 00:31:54.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.025 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:54.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:54.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:31:54.025 00:31:54.025 --- 10.0.0.1 ping statistics --- 00:31:54.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.025 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2207039 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2207039 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2207039 ']' 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:54.025 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:54.025 [2024-11-19 18:30:54.724833] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:54.025 [2024-11-19 18:30:54.725952] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:31:54.025 [2024-11-19 18:30:54.726006] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.025 [2024-11-19 18:30:54.826569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:54.025 [2024-11-19 18:30:54.880678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.025 [2024-11-19 18:30:54.880729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:54.025 [2024-11-19 18:30:54.880738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:54.025 [2024-11-19 18:30:54.880745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:54.025 [2024-11-19 18:30:54.880752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:54.025 [2024-11-19 18:30:54.883098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:54.025 [2024-11-19 18:30:54.883243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:54.025 [2024-11-19 18:30:54.883458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:54.025 [2024-11-19 18:30:54.883461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.025 [2024-11-19 18:30:54.961457] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:54.025 [2024-11-19 18:30:54.962329] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:54.025 [2024-11-19 18:30:54.962569] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:54.025 [2024-11-19 18:30:54.963032] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:54.025 [2024-11-19 18:30:54.963084] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:54.287 [2024-11-19 18:30:55.584701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:54.287 18:30:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:54.287 Malloc0 00:31:54.287 [2024-11-19 18:30:55.680948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2207239 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2207239 /var/tmp/bdevperf.sock 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2207239 ']' 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:54.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:54.287 { 00:31:54.287 "params": { 00:31:54.287 "name": "Nvme$subsystem", 00:31:54.287 "trtype": "$TEST_TRANSPORT", 00:31:54.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:54.287 "adrfam": "ipv4", 00:31:54.287 "trsvcid": "$NVMF_PORT", 00:31:54.287 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:54.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:54.287 "hdgst": ${hdgst:-false}, 00:31:54.287 "ddgst": ${ddgst:-false} 00:31:54.287 }, 00:31:54.287 "method": "bdev_nvme_attach_controller" 00:31:54.287 } 00:31:54.287 EOF 00:31:54.287 )") 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:54.287 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:54.549 18:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:54.549 "params": { 00:31:54.549 "name": "Nvme0", 00:31:54.549 "trtype": "tcp", 00:31:54.549 "traddr": "10.0.0.2", 00:31:54.549 "adrfam": "ipv4", 00:31:54.549 "trsvcid": "4420", 00:31:54.549 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:54.549 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:54.549 "hdgst": false, 00:31:54.549 "ddgst": false 00:31:54.549 }, 00:31:54.549 "method": "bdev_nvme_attach_controller" 00:31:54.549 }' 00:31:54.549 [2024-11-19 18:30:55.789779] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:31:54.549 [2024-11-19 18:30:55.789846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2207239 ] 00:31:54.549 [2024-11-19 18:30:55.882705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.549 [2024-11-19 18:30:55.937097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.809 Running I/O for 10 seconds... 
00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:55.385 18:30:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=655 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 655 -ge 100 ']' 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.385 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.385 
[2024-11-19 18:30:56.688432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21402a0 is same with the state(6) to be set 00:31:55.385 [... identical message repeated for tqpair=0x21402a0 with timestamps 18:30:56.688493 through 18:30:56.688976 ...] 00:31:55.386 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.386 [2024-11-19 18:30:56.693591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.386 [2024-11-19 18:30:56.693650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.386 [2024-11-19 18:30:56.693674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.386 [2024-11-19 18:30:56.693683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.386 [2024-11-19 18:30:56.693694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.386 [2024-11-19 18:30:56.693702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.386 [2024-11-19 18:30:56.693712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.386 [2024-11-19 18:30:56.693721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:55.386 [2024-11-19 18:30:56.693732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.386 [2024-11-19 18:30:56.693740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.386 [2024-11-19 18:30:56.693750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.386 [2024-11-19 18:30:56.693757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.386 [2024-11-19 18:30:56.693767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.386 [2024-11-19 18:30:56.693784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.693794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.693803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.693813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.693820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.693831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.693838] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.693847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.693855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.693865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.693874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.693886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.693894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.693904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.693913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.693922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.693930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.693939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.693949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:55.387 [2024-11-19 18:30:56.693959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.693968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.693978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.693986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.693995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:55.387 [2024-11-19 18:30:56.694139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.387 [2024-11-19 18:30:56.694264] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.387 [2024-11-19 18:30:56.694456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.387 [2024-11-19 18:30:56.694464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 
[2024-11-19 18:30:56.694483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.388 [2024-11-19 18:30:56.694503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:55.388 [2024-11-19 18:30:56.694684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.388 [2024-11-19 18:30:56.694866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.694875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101b190 is same with the state(6) to be set 00:31:55.388 [2024-11-19 18:30:56.695000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.388 [2024-11-19 18:30:56.695013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.695022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.388 [2024-11-19 18:30:56.695029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.695037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.388 [2024-11-19 18:30:56.695046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.695055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:55.388 [2024-11-19 18:30:56.695066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.695074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02000 is same with the state(6) to be set 00:31:55.388 [2024-11-19 18:30:56.696311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:55.388 task offset: 91904 on job bdev=Nvme0n1 fails 00:31:55.388 00:31:55.388 Latency(us) 00:31:55.388 [2024-11-19T17:30:56.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.388 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:55.388 Job: Nvme0n1 ended in about 0.55 seconds with error 00:31:55.388 Verification LBA range: start 0x0 
length 0x400 00:31:55.388 Nvme0n1 : 0.55 1300.79 81.30 115.95 0.00 44072.50 2048.00 36700.16 00:31:55.388 [2024-11-19T17:30:56.859Z] =================================================================================================================== 00:31:55.388 [2024-11-19T17:30:56.859Z] Total : 1300.79 81.30 115.95 0.00 44072.50 2048.00 36700.16 00:31:55.388 [2024-11-19 18:30:56.698521] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:55.388 [2024-11-19 18:30:56.698559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe02000 (9): Bad file descriptor 00:31:55.388 [2024-11-19 18:30:56.700226] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:31:55.388 [2024-11-19 18:30:56.700407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:55.388 [2024-11-19 18:30:56.700455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:55.388 [2024-11-19 18:30:56.700476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:31:55.388 [2024-11-19 18:30:56.700485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:31:55.388 [2024-11-19 18:30:56.700495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:55.388 [2024-11-19 18:30:56.700503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe02000 00:31:55.388 [2024-11-19 18:30:56.700529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe02000 (9): Bad file descriptor 00:31:55.388 [2024-11-19 18:30:56.700544] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:55.388 [2024-11-19 18:30:56.700553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:55.388 [2024-11-19 18:30:56.700565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:55.388 [2024-11-19 18:30:56.700576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:55.388 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.388 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:56.333 18:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2207239 00:31:56.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2207239) - No such process 00:31:56.333 18:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:56.333 18:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:56.333 18:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:56.333 18:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:56.333 18:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 
00:31:56.333 18:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:56.333 18:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:56.333 18:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:56.333 { 00:31:56.333 "params": { 00:31:56.333 "name": "Nvme$subsystem", 00:31:56.333 "trtype": "$TEST_TRANSPORT", 00:31:56.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.333 "adrfam": "ipv4", 00:31:56.333 "trsvcid": "$NVMF_PORT", 00:31:56.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.333 "hdgst": ${hdgst:-false}, 00:31:56.333 "ddgst": ${ddgst:-false} 00:31:56.333 }, 00:31:56.333 "method": "bdev_nvme_attach_controller" 00:31:56.333 } 00:31:56.333 EOF 00:31:56.333 )") 00:31:56.333 18:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:56.333 18:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:56.333 18:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:56.333 18:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:56.333 "params": { 00:31:56.333 "name": "Nvme0", 00:31:56.333 "trtype": "tcp", 00:31:56.333 "traddr": "10.0.0.2", 00:31:56.333 "adrfam": "ipv4", 00:31:56.333 "trsvcid": "4420", 00:31:56.333 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:56.333 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:56.333 "hdgst": false, 00:31:56.333 "ddgst": false 00:31:56.333 }, 00:31:56.333 "method": "bdev_nvme_attach_controller" 00:31:56.333 }' 00:31:56.333 [2024-11-19 18:30:57.770762] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:31:56.333 [2024-11-19 18:30:57.770839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2207590 ] 00:31:56.595 [2024-11-19 18:30:57.865268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.595 [2024-11-19 18:30:57.919868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.856 Running I/O for 1 seconds... 00:31:57.800 1959.00 IOPS, 122.44 MiB/s 00:31:57.800 Latency(us) 00:31:57.800 [2024-11-19T17:30:59.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:57.800 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:57.800 Verification LBA range: start 0x0 length 0x400 00:31:57.800 Nvme0n1 : 1.01 1996.68 124.79 0.00 0.00 31241.16 1508.69 33204.91 00:31:57.800 [2024-11-19T17:30:59.271Z] =================================================================================================================== 00:31:57.800 [2024-11-19T17:30:59.271Z] Total : 1996.68 124.79 0.00 0.00 31241.16 1508.69 33204.91 00:31:57.800 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:57.800 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:57.800 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:57.800 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:57.800 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:31:57.800 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:57.800 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:57.800 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:57.800 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:57.800 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:57.800 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:57.800 rmmod nvme_tcp 00:31:57.800 rmmod nvme_fabrics 00:31:58.060 rmmod nvme_keyring 00:31:58.060 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:58.060 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:58.060 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:58.060 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2207039 ']' 00:31:58.060 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2207039 00:31:58.060 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2207039 ']' 00:31:58.060 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2207039 00:31:58.060 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:31:58.061 18:30:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:58.061 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2207039 00:31:58.061 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:58.061 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:58.061 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2207039' 00:31:58.061 killing process with pid 2207039 00:31:58.061 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2207039 00:31:58.061 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2207039 00:31:58.061 [2024-11-19 18:30:59.463927] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:58.061 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:58.061 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:58.061 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:58.061 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:58.061 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:58.061 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:58.061 18:30:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:58.061 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:58.061 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:58.061 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.061 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.061 18:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:00.609 00:32:00.609 real 0m14.647s 00:32:00.609 user 0m19.052s 00:32:00.609 sys 0m7.648s 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:00.609 ************************************ 00:32:00.609 END TEST nvmf_host_management 00:32:00.609 ************************************ 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:00.609 
18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:00.609 ************************************ 00:32:00.609 START TEST nvmf_lvol 00:32:00.609 ************************************ 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:00.609 * Looking for test storage... 00:32:00.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:00.609 18:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:00.609 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:00.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.610 --rc genhtml_branch_coverage=1 00:32:00.610 --rc 
genhtml_function_coverage=1 00:32:00.610 --rc genhtml_legend=1 00:32:00.610 --rc geninfo_all_blocks=1 00:32:00.610 --rc geninfo_unexecuted_blocks=1 00:32:00.610 00:32:00.610 ' 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:00.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.610 --rc genhtml_branch_coverage=1 00:32:00.610 --rc genhtml_function_coverage=1 00:32:00.610 --rc genhtml_legend=1 00:32:00.610 --rc geninfo_all_blocks=1 00:32:00.610 --rc geninfo_unexecuted_blocks=1 00:32:00.610 00:32:00.610 ' 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:00.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.610 --rc genhtml_branch_coverage=1 00:32:00.610 --rc genhtml_function_coverage=1 00:32:00.610 --rc genhtml_legend=1 00:32:00.610 --rc geninfo_all_blocks=1 00:32:00.610 --rc geninfo_unexecuted_blocks=1 00:32:00.610 00:32:00.610 ' 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:00.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.610 --rc genhtml_branch_coverage=1 00:32:00.610 --rc genhtml_function_coverage=1 00:32:00.610 --rc genhtml_legend=1 00:32:00.610 --rc geninfo_all_blocks=1 00:32:00.610 --rc geninfo_unexecuted_blocks=1 00:32:00.610 00:32:00.610 ' 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.610 18:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.610 18:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:00.610 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:00.611 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:08.754 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:08.755 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:08.755 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:08.755 18:31:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:08.755 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:08.755 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:08.755 18:31:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:08.755 18:31:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:08.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:08.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:32:08.755 00:32:08.755 --- 10.0.0.2 ping statistics --- 00:32:08.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.755 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:32:08.755 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:08.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:08.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:32:08.755 00:32:08.756 --- 10.0.0.1 ping statistics --- 00:32:08.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.756 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:08.756 
18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2212129 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2212129 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2212129 ']' 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:08.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:08.756 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:08.756 [2024-11-19 18:31:09.444169] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:32:08.756 [2024-11-19 18:31:09.445342] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:32:08.756 [2024-11-19 18:31:09.445397] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:08.756 [2024-11-19 18:31:09.543334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:08.756 [2024-11-19 18:31:09.596196] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:08.756 [2024-11-19 18:31:09.596248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:08.756 [2024-11-19 18:31:09.596257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:08.756 [2024-11-19 18:31:09.596263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:08.756 [2024-11-19 18:31:09.596269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:08.756 [2024-11-19 18:31:09.597933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.756 [2024-11-19 18:31:09.598087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.756 [2024-11-19 18:31:09.598087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:08.756 [2024-11-19 18:31:09.675282] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:08.756 [2024-11-19 18:31:09.676313] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:08.756 [2024-11-19 18:31:09.676821] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:08.756 [2024-11-19 18:31:09.676952] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:09.017 18:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:09.018 18:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:09.018 18:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:09.018 18:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:09.018 18:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:09.018 18:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.018 18:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:09.018 [2024-11-19 18:31:10.479347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.279 18:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:09.541 18:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:09.541 18:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:09.541 18:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:09.541 18:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:09.803 18:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:10.065 18:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=22089d8c-543b-42b7-b0a0-4be750d36b89 00:32:10.065 18:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 22089d8c-543b-42b7-b0a0-4be750d36b89 lvol 20 00:32:10.329 18:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9f4c4b46-4048-47c8-b0da-903d0ab32261 00:32:10.329 18:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:10.329 18:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9f4c4b46-4048-47c8-b0da-903d0ab32261 00:32:10.589 18:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:10.850 [2024-11-19 18:31:12.119230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:10.850 18:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:11.111 
18:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2212626 00:32:11.111 18:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:11.111 18:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:12.055 18:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9f4c4b46-4048-47c8-b0da-903d0ab32261 MY_SNAPSHOT 00:32:12.317 18:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e099da8d-d06e-4aa1-8e38-e8c9c0f0e812 00:32:12.317 18:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 9f4c4b46-4048-47c8-b0da-903d0ab32261 30 00:32:12.578 18:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e099da8d-d06e-4aa1-8e38-e8c9c0f0e812 MY_CLONE 00:32:12.839 18:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d8547e28-e98f-4d50-b91e-86fb2ee75309 00:32:12.839 18:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d8547e28-e98f-4d50-b91e-86fb2ee75309 00:32:13.099 18:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2212626 00:32:23.099 Initializing NVMe Controllers 00:32:23.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:23.099 
Controller IO queue size 128, less than required. 00:32:23.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:23.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:23.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:23.099 Initialization complete. Launching workers. 00:32:23.099 ======================================================== 00:32:23.099 Latency(us) 00:32:23.099 Device Information : IOPS MiB/s Average min max 00:32:23.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15263.10 59.62 8389.46 1766.27 57741.38 00:32:23.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15273.40 59.66 8382.42 4073.50 73819.72 00:32:23.099 ======================================================== 00:32:23.099 Total : 30536.50 119.28 8385.94 1766.27 73819.72 00:32:23.099 00:32:23.099 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:23.099 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9f4c4b46-4048-47c8-b0da-903d0ab32261 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 22089d8c-543b-42b7-b0a0-4be750d36b89 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:23.099 rmmod nvme_tcp 00:32:23.099 rmmod nvme_fabrics 00:32:23.099 rmmod nvme_keyring 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2212129 ']' 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2212129 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2212129 ']' 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2212129 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2212129 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2212129' 00:32:23.099 killing process with pid 2212129 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2212129 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2212129 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.099 18:31:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:23.099 18:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:24.486 00:32:24.486 real 0m23.955s 00:32:24.486 user 0m55.998s 00:32:24.486 sys 0m10.897s 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:24.486 ************************************ 00:32:24.486 END TEST nvmf_lvol 00:32:24.486 ************************************ 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:24.486 ************************************ 00:32:24.486 START TEST nvmf_lvs_grow 00:32:24.486 ************************************ 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:24.486 * Looking for test storage... 
00:32:24.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:24.486 18:31:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:24.486 18:31:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:24.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.486 --rc genhtml_branch_coverage=1 00:32:24.486 --rc genhtml_function_coverage=1 00:32:24.486 --rc genhtml_legend=1 00:32:24.486 --rc geninfo_all_blocks=1 00:32:24.486 --rc geninfo_unexecuted_blocks=1 00:32:24.486 00:32:24.486 ' 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:24.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.486 --rc genhtml_branch_coverage=1 00:32:24.486 --rc genhtml_function_coverage=1 00:32:24.486 --rc genhtml_legend=1 00:32:24.486 --rc geninfo_all_blocks=1 00:32:24.486 --rc geninfo_unexecuted_blocks=1 00:32:24.486 00:32:24.486 ' 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:24.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.486 --rc genhtml_branch_coverage=1 00:32:24.486 --rc genhtml_function_coverage=1 00:32:24.486 --rc genhtml_legend=1 00:32:24.486 --rc geninfo_all_blocks=1 00:32:24.486 --rc geninfo_unexecuted_blocks=1 00:32:24.486 00:32:24.486 ' 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:24.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:24.486 --rc genhtml_branch_coverage=1 00:32:24.486 --rc genhtml_function_coverage=1 00:32:24.486 --rc genhtml_legend=1 00:32:24.486 --rc geninfo_all_blocks=1 00:32:24.486 --rc 
geninfo_unexecuted_blocks=1 00:32:24.486 00:32:24.486 ' 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:24.486 18:31:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:24.486 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.487 18:31:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:24.487 18:31:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:24.487 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:32.870 
18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:32.870 18:31:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:32.870 18:31:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:32.870 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:32.870 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:32.870 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:32.871 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.871 18:31:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:32.871 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:32.871 
18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:32.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:32.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:32:32.871 00:32:32.871 --- 10.0.0.2 ping statistics --- 00:32:32.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.871 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:32.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:32.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:32:32.871 00:32:32.871 --- 10.0.0.1 ping statistics --- 00:32:32.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.871 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:32.871 18:31:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2218965 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2218965 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2218965 ']' 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.871 18:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:32.871 [2024-11-19 18:31:33.489524] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:32.871 [2024-11-19 18:31:33.490657] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:32:32.871 [2024-11-19 18:31:33.490708] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:32.871 [2024-11-19 18:31:33.590003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.872 [2024-11-19 18:31:33.641034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:32.872 [2024-11-19 18:31:33.641083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:32.872 [2024-11-19 18:31:33.641091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:32.872 [2024-11-19 18:31:33.641098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:32.872 [2024-11-19 18:31:33.641104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:32.872 [2024-11-19 18:31:33.641833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.872 [2024-11-19 18:31:33.717705] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:32.872 [2024-11-19 18:31:33.717992] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:32.872 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.872 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:32.872 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:32.872 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:32.872 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:33.133 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:33.133 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:33.133 [2024-11-19 18:31:34.514717] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:33.133 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:33.133 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:33.133 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:33.133 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:33.133 ************************************ 00:32:33.133 START TEST lvs_grow_clean 00:32:33.133 ************************************ 00:32:33.133 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:32:33.133 18:31:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:33.133 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:33.133 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:33.133 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:33.133 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:33.134 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:33.134 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:33.134 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:33.394 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:33.394 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:33.394 18:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:33.655 18:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=21e03585-eba6-48ed-ae81-e5a4e4e4610e 00:32:33.655 18:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21e03585-eba6-48ed-ae81-e5a4e4e4610e 00:32:33.655 18:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:33.916 18:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:33.916 18:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:33.916 18:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 21e03585-eba6-48ed-ae81-e5a4e4e4610e lvol 150 00:32:34.176 18:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3b2eea09-58e6-4775-9632-a0a732cf314a 00:32:34.176 18:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:34.176 18:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:34.176 [2024-11-19 18:31:35.578396] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:34.176 [2024-11-19 18:31:35.578577] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:34.176 true 00:32:34.176 18:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21e03585-eba6-48ed-ae81-e5a4e4e4610e 00:32:34.176 18:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:34.436 18:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:34.436 18:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:34.697 18:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3b2eea09-58e6-4775-9632-a0a732cf314a 00:32:34.697 18:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:34.958 [2024-11-19 18:31:36.315019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:34.958 18:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:35.218 18:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2219525 00:32:35.218 18:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:35.218 18:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:35.218 18:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2219525 /var/tmp/bdevperf.sock 00:32:35.218 18:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2219525 ']' 00:32:35.218 18:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:35.218 18:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:35.218 18:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:35.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:35.218 18:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:35.218 18:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:35.218 [2024-11-19 18:31:36.572853] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:32:35.218 [2024-11-19 18:31:36.572929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2219525 ] 00:32:35.218 [2024-11-19 18:31:36.663728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.479 [2024-11-19 18:31:36.716365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.050 18:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:36.050 18:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:36.050 18:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:36.310 Nvme0n1 00:32:36.310 18:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:36.571 [ 00:32:36.571 { 00:32:36.571 "name": "Nvme0n1", 00:32:36.571 "aliases": [ 00:32:36.571 "3b2eea09-58e6-4775-9632-a0a732cf314a" 00:32:36.571 ], 00:32:36.571 "product_name": "NVMe disk", 00:32:36.571 
"block_size": 4096, 00:32:36.571 "num_blocks": 38912, 00:32:36.571 "uuid": "3b2eea09-58e6-4775-9632-a0a732cf314a", 00:32:36.571 "numa_id": 0, 00:32:36.571 "assigned_rate_limits": { 00:32:36.571 "rw_ios_per_sec": 0, 00:32:36.571 "rw_mbytes_per_sec": 0, 00:32:36.571 "r_mbytes_per_sec": 0, 00:32:36.571 "w_mbytes_per_sec": 0 00:32:36.571 }, 00:32:36.571 "claimed": false, 00:32:36.571 "zoned": false, 00:32:36.571 "supported_io_types": { 00:32:36.571 "read": true, 00:32:36.571 "write": true, 00:32:36.571 "unmap": true, 00:32:36.571 "flush": true, 00:32:36.571 "reset": true, 00:32:36.571 "nvme_admin": true, 00:32:36.571 "nvme_io": true, 00:32:36.571 "nvme_io_md": false, 00:32:36.571 "write_zeroes": true, 00:32:36.571 "zcopy": false, 00:32:36.571 "get_zone_info": false, 00:32:36.571 "zone_management": false, 00:32:36.571 "zone_append": false, 00:32:36.571 "compare": true, 00:32:36.571 "compare_and_write": true, 00:32:36.571 "abort": true, 00:32:36.571 "seek_hole": false, 00:32:36.571 "seek_data": false, 00:32:36.571 "copy": true, 00:32:36.571 "nvme_iov_md": false 00:32:36.571 }, 00:32:36.571 "memory_domains": [ 00:32:36.571 { 00:32:36.571 "dma_device_id": "system", 00:32:36.571 "dma_device_type": 1 00:32:36.571 } 00:32:36.571 ], 00:32:36.571 "driver_specific": { 00:32:36.571 "nvme": [ 00:32:36.571 { 00:32:36.571 "trid": { 00:32:36.571 "trtype": "TCP", 00:32:36.571 "adrfam": "IPv4", 00:32:36.571 "traddr": "10.0.0.2", 00:32:36.571 "trsvcid": "4420", 00:32:36.571 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:36.571 }, 00:32:36.571 "ctrlr_data": { 00:32:36.571 "cntlid": 1, 00:32:36.571 "vendor_id": "0x8086", 00:32:36.571 "model_number": "SPDK bdev Controller", 00:32:36.571 "serial_number": "SPDK0", 00:32:36.571 "firmware_revision": "25.01", 00:32:36.571 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:36.571 "oacs": { 00:32:36.571 "security": 0, 00:32:36.571 "format": 0, 00:32:36.571 "firmware": 0, 00:32:36.571 "ns_manage": 0 00:32:36.571 }, 00:32:36.571 "multi_ctrlr": true, 
00:32:36.571 "ana_reporting": false 00:32:36.571 }, 00:32:36.571 "vs": { 00:32:36.571 "nvme_version": "1.3" 00:32:36.571 }, 00:32:36.571 "ns_data": { 00:32:36.571 "id": 1, 00:32:36.571 "can_share": true 00:32:36.571 } 00:32:36.571 } 00:32:36.571 ], 00:32:36.571 "mp_policy": "active_passive" 00:32:36.571 } 00:32:36.571 } 00:32:36.571 ] 00:32:36.572 18:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2219692 00:32:36.572 18:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:36.572 18:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:36.572 Running I/O for 10 seconds... 00:32:37.513 Latency(us) 00:32:37.513 [2024-11-19T17:31:38.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:37.513 Nvme0n1 : 1.00 16397.00 64.05 0.00 0.00 0.00 0.00 0.00 00:32:37.513 [2024-11-19T17:31:38.984Z] =================================================================================================================== 00:32:37.513 [2024-11-19T17:31:38.984Z] Total : 16397.00 64.05 0.00 0.00 0.00 0.00 0.00 00:32:37.513 00:32:38.471 18:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 21e03585-eba6-48ed-ae81-e5a4e4e4610e 00:32:38.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:38.471 Nvme0n1 : 2.00 16638.50 64.99 0.00 0.00 0.00 0.00 0.00 00:32:38.471 [2024-11-19T17:31:39.942Z] 
=================================================================================================================== 00:32:38.471 [2024-11-19T17:31:39.942Z] Total : 16638.50 64.99 0.00 0.00 0.00 0.00 0.00 00:32:38.471 00:32:38.732 true 00:32:38.732 18:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21e03585-eba6-48ed-ae81-e5a4e4e4610e 00:32:38.732 18:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:38.992 18:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:38.992 18:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:38.992 18:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2219692 00:32:39.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:39.562 Nvme0n1 : 3.00 16799.00 65.62 0.00 0.00 0.00 0.00 0.00 00:32:39.562 [2024-11-19T17:31:41.033Z] =================================================================================================================== 00:32:39.562 [2024-11-19T17:31:41.033Z] Total : 16799.00 65.62 0.00 0.00 0.00 0.00 0.00 00:32:39.562 00:32:40.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:40.504 Nvme0n1 : 4.00 17003.25 66.42 0.00 0.00 0.00 0.00 0.00 00:32:40.504 [2024-11-19T17:31:41.975Z] =================================================================================================================== 00:32:40.504 [2024-11-19T17:31:41.975Z] Total : 17003.25 66.42 0.00 0.00 0.00 0.00 0.00 00:32:40.504 00:32:41.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:32:41.889 Nvme0n1 : 5.00 18418.60 71.95 0.00 0.00 0.00 0.00 0.00 00:32:41.889 [2024-11-19T17:31:43.360Z] =================================================================================================================== 00:32:41.889 [2024-11-19T17:31:43.360Z] Total : 18418.60 71.95 0.00 0.00 0.00 0.00 0.00 00:32:41.889 00:32:42.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:42.833 Nvme0n1 : 6.00 19490.17 76.13 0.00 0.00 0.00 0.00 0.00 00:32:42.833 [2024-11-19T17:31:44.304Z] =================================================================================================================== 00:32:42.833 [2024-11-19T17:31:44.304Z] Total : 19490.17 76.13 0.00 0.00 0.00 0.00 0.00 00:32:42.833 00:32:43.774 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:43.774 Nvme0n1 : 7.00 20262.43 79.15 0.00 0.00 0.00 0.00 0.00 00:32:43.774 [2024-11-19T17:31:45.245Z] =================================================================================================================== 00:32:43.774 [2024-11-19T17:31:45.245Z] Total : 20262.43 79.15 0.00 0.00 0.00 0.00 0.00 00:32:43.774 00:32:44.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:44.719 Nvme0n1 : 8.00 20841.62 81.41 0.00 0.00 0.00 0.00 0.00 00:32:44.719 [2024-11-19T17:31:46.190Z] =================================================================================================================== 00:32:44.719 [2024-11-19T17:31:46.190Z] Total : 20841.62 81.41 0.00 0.00 0.00 0.00 0.00 00:32:44.719 00:32:45.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:45.660 Nvme0n1 : 9.00 21293.89 83.18 0.00 0.00 0.00 0.00 0.00 00:32:45.660 [2024-11-19T17:31:47.131Z] =================================================================================================================== 00:32:45.660 [2024-11-19T17:31:47.131Z] Total : 21293.89 83.18 0.00 0.00 0.00 0.00 0.00 00:32:45.660 
00:32:46.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:46.603 Nvme0n1 : 10.00 21654.10 84.59 0.00 0.00 0.00 0.00 0.00 00:32:46.603 [2024-11-19T17:31:48.074Z] =================================================================================================================== 00:32:46.603 [2024-11-19T17:31:48.074Z] Total : 21654.10 84.59 0.00 0.00 0.00 0.00 0.00 00:32:46.603 00:32:46.603 00:32:46.603 Latency(us) 00:32:46.603 [2024-11-19T17:31:48.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:46.603 Nvme0n1 : 10.00 21656.17 84.59 0.00 0.00 5906.31 3959.47 23920.64 00:32:46.603 [2024-11-19T17:31:48.074Z] =================================================================================================================== 00:32:46.603 [2024-11-19T17:31:48.074Z] Total : 21656.17 84.59 0.00 0.00 5906.31 3959.47 23920.64 00:32:46.603 { 00:32:46.603 "results": [ 00:32:46.603 { 00:32:46.603 "job": "Nvme0n1", 00:32:46.603 "core_mask": "0x2", 00:32:46.603 "workload": "randwrite", 00:32:46.603 "status": "finished", 00:32:46.603 "queue_depth": 128, 00:32:46.603 "io_size": 4096, 00:32:46.603 "runtime": 10.004953, 00:32:46.603 "iops": 21656.173697167793, 00:32:46.603 "mibps": 84.59442850456169, 00:32:46.603 "io_failed": 0, 00:32:46.603 "io_timeout": 0, 00:32:46.603 "avg_latency_us": 5906.308465970366, 00:32:46.603 "min_latency_us": 3959.4666666666667, 00:32:46.603 "max_latency_us": 23920.64 00:32:46.603 } 00:32:46.603 ], 00:32:46.603 "core_count": 1 00:32:46.603 } 00:32:46.603 18:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2219525 00:32:46.603 18:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2219525 ']' 00:32:46.603 18:31:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2219525 00:32:46.603 18:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:46.603 18:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:46.603 18:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2219525 00:32:46.603 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:46.603 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:46.603 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2219525' 00:32:46.603 killing process with pid 2219525 00:32:46.603 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2219525 00:32:46.603 Received shutdown signal, test time was about 10.000000 seconds 00:32:46.603 00:32:46.603 Latency(us) 00:32:46.603 [2024-11-19T17:31:48.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.603 [2024-11-19T17:31:48.074Z] =================================================================================================================== 00:32:46.603 [2024-11-19T17:31:48.074Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:46.603 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2219525 00:32:46.864 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:46.864 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:47.124 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21e03585-eba6-48ed-ae81-e5a4e4e4610e 00:32:47.124 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:47.384 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:47.384 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:47.384 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:47.384 [2024-11-19 18:31:48.818475] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:47.645 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21e03585-eba6-48ed-ae81-e5a4e4e4610e 00:32:47.645 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:47.645 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21e03585-eba6-48ed-ae81-e5a4e4e4610e 00:32:47.645 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:47.645 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:47.645 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:47.645 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:47.645 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:47.645 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:47.645 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:47.645 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:47.645 18:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21e03585-eba6-48ed-ae81-e5a4e4e4610e 00:32:47.645 request: 00:32:47.645 { 00:32:47.645 "uuid": "21e03585-eba6-48ed-ae81-e5a4e4e4610e", 00:32:47.645 "method": 
"bdev_lvol_get_lvstores", 00:32:47.645 "req_id": 1 00:32:47.645 } 00:32:47.645 Got JSON-RPC error response 00:32:47.645 response: 00:32:47.645 { 00:32:47.645 "code": -19, 00:32:47.645 "message": "No such device" 00:32:47.645 } 00:32:47.645 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:47.645 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:47.645 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:47.645 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:47.645 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:47.912 aio_bdev 00:32:47.912 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3b2eea09-58e6-4775-9632-a0a732cf314a 00:32:47.912 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=3b2eea09-58e6-4775-9632-a0a732cf314a 00:32:47.912 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:47.912 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:47.912 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:47.912 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:47.912 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:48.174 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3b2eea09-58e6-4775-9632-a0a732cf314a -t 2000 00:32:48.174 [ 00:32:48.174 { 00:32:48.174 "name": "3b2eea09-58e6-4775-9632-a0a732cf314a", 00:32:48.174 "aliases": [ 00:32:48.174 "lvs/lvol" 00:32:48.174 ], 00:32:48.174 "product_name": "Logical Volume", 00:32:48.174 "block_size": 4096, 00:32:48.174 "num_blocks": 38912, 00:32:48.174 "uuid": "3b2eea09-58e6-4775-9632-a0a732cf314a", 00:32:48.174 "assigned_rate_limits": { 00:32:48.174 "rw_ios_per_sec": 0, 00:32:48.174 "rw_mbytes_per_sec": 0, 00:32:48.174 "r_mbytes_per_sec": 0, 00:32:48.174 "w_mbytes_per_sec": 0 00:32:48.174 }, 00:32:48.174 "claimed": false, 00:32:48.174 "zoned": false, 00:32:48.174 "supported_io_types": { 00:32:48.174 "read": true, 00:32:48.174 "write": true, 00:32:48.174 "unmap": true, 00:32:48.174 "flush": false, 00:32:48.174 "reset": true, 00:32:48.174 "nvme_admin": false, 00:32:48.174 "nvme_io": false, 00:32:48.174 "nvme_io_md": false, 00:32:48.174 "write_zeroes": true, 00:32:48.174 "zcopy": false, 00:32:48.174 "get_zone_info": false, 00:32:48.174 "zone_management": false, 00:32:48.174 "zone_append": false, 00:32:48.174 "compare": false, 00:32:48.174 "compare_and_write": false, 00:32:48.174 "abort": false, 00:32:48.174 "seek_hole": true, 00:32:48.174 "seek_data": true, 00:32:48.174 "copy": false, 00:32:48.174 "nvme_iov_md": false 00:32:48.174 }, 00:32:48.174 "driver_specific": { 00:32:48.174 "lvol": { 00:32:48.174 "lvol_store_uuid": "21e03585-eba6-48ed-ae81-e5a4e4e4610e", 00:32:48.174 "base_bdev": "aio_bdev", 00:32:48.174 
"thin_provision": false, 00:32:48.174 "num_allocated_clusters": 38, 00:32:48.174 "snapshot": false, 00:32:48.174 "clone": false, 00:32:48.174 "esnap_clone": false 00:32:48.174 } 00:32:48.174 } 00:32:48.174 } 00:32:48.174 ] 00:32:48.174 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:48.174 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:48.174 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21e03585-eba6-48ed-ae81-e5a4e4e4610e 00:32:48.435 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:48.435 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21e03585-eba6-48ed-ae81-e5a4e4e4610e 00:32:48.435 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:48.696 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:48.696 18:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3b2eea09-58e6-4775-9632-a0a732cf314a 00:32:48.696 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 21e03585-eba6-48ed-ae81-e5a4e4e4610e 
00:32:48.956 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:49.218 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:49.218 00:32:49.218 real 0m15.944s 00:32:49.218 user 0m15.005s 00:32:49.218 sys 0m1.813s 00:32:49.218 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:49.218 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:49.218 ************************************ 00:32:49.218 END TEST lvs_grow_clean 00:32:49.218 ************************************ 00:32:49.218 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:49.218 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:49.218 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:49.218 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:49.218 ************************************ 00:32:49.218 START TEST lvs_grow_dirty 00:32:49.218 ************************************ 00:32:49.218 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:49.218 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:49.218 18:31:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:49.218 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:49.218 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:49.218 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:49.218 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:49.218 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:49.218 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:49.218 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:49.480 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:49.480 18:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:49.740 18:31:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cfe2cd63-3813-4011-9f75-215b64f2c010 00:32:49.740 18:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfe2cd63-3813-4011-9f75-215b64f2c010 00:32:49.740 18:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:50.001 18:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:50.001 18:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:50.001 18:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cfe2cd63-3813-4011-9f75-215b64f2c010 lvol 150 00:32:50.001 18:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6c547c6a-c530-43ce-bf58-70835e70bf99 00:32:50.001 18:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:50.001 18:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:50.262 [2024-11-19 18:31:51.614389] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:50.262 [2024-11-19 
18:31:51.614554] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:50.262 true 00:32:50.262 18:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfe2cd63-3813-4011-9f75-215b64f2c010 00:32:50.262 18:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:50.522 18:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:50.522 18:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:50.522 18:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6c547c6a-c530-43ce-bf58-70835e70bf99 00:32:50.782 18:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:51.043 [2024-11-19 18:31:52.262828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.043 18:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:51.043 18:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2222431 00:32:51.043 18:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:51.043 18:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:51.043 18:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2222431 /var/tmp/bdevperf.sock 00:32:51.043 18:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2222431 ']' 00:32:51.043 18:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:51.043 18:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:51.043 18:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:51.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:51.043 18:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:51.043 18:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:51.043 [2024-11-19 18:31:52.479209] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:32:51.043 [2024-11-19 18:31:52.479267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222431 ] 00:32:51.303 [2024-11-19 18:31:52.566411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.303 [2024-11-19 18:31:52.597929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.875 18:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:51.875 18:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:51.875 18:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:52.135 Nvme0n1 00:32:52.135 18:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:52.395 [ 00:32:52.395 { 00:32:52.395 "name": "Nvme0n1", 00:32:52.395 "aliases": [ 00:32:52.395 "6c547c6a-c530-43ce-bf58-70835e70bf99" 00:32:52.395 ], 00:32:52.395 "product_name": "NVMe disk", 00:32:52.395 "block_size": 4096, 00:32:52.395 "num_blocks": 38912, 00:32:52.395 "uuid": "6c547c6a-c530-43ce-bf58-70835e70bf99", 00:32:52.395 "numa_id": 0, 00:32:52.395 "assigned_rate_limits": { 00:32:52.395 "rw_ios_per_sec": 0, 00:32:52.395 "rw_mbytes_per_sec": 0, 00:32:52.395 "r_mbytes_per_sec": 0, 00:32:52.395 "w_mbytes_per_sec": 0 00:32:52.395 }, 00:32:52.395 "claimed": false, 00:32:52.395 "zoned": false, 
00:32:52.395 "supported_io_types": { 00:32:52.395 "read": true, 00:32:52.395 "write": true, 00:32:52.395 "unmap": true, 00:32:52.395 "flush": true, 00:32:52.395 "reset": true, 00:32:52.395 "nvme_admin": true, 00:32:52.395 "nvme_io": true, 00:32:52.395 "nvme_io_md": false, 00:32:52.395 "write_zeroes": true, 00:32:52.395 "zcopy": false, 00:32:52.395 "get_zone_info": false, 00:32:52.395 "zone_management": false, 00:32:52.395 "zone_append": false, 00:32:52.395 "compare": true, 00:32:52.395 "compare_and_write": true, 00:32:52.395 "abort": true, 00:32:52.395 "seek_hole": false, 00:32:52.395 "seek_data": false, 00:32:52.395 "copy": true, 00:32:52.395 "nvme_iov_md": false 00:32:52.395 }, 00:32:52.395 "memory_domains": [ 00:32:52.395 { 00:32:52.395 "dma_device_id": "system", 00:32:52.395 "dma_device_type": 1 00:32:52.395 } 00:32:52.395 ], 00:32:52.395 "driver_specific": { 00:32:52.395 "nvme": [ 00:32:52.395 { 00:32:52.395 "trid": { 00:32:52.395 "trtype": "TCP", 00:32:52.395 "adrfam": "IPv4", 00:32:52.395 "traddr": "10.0.0.2", 00:32:52.395 "trsvcid": "4420", 00:32:52.395 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:52.395 }, 00:32:52.395 "ctrlr_data": { 00:32:52.395 "cntlid": 1, 00:32:52.395 "vendor_id": "0x8086", 00:32:52.395 "model_number": "SPDK bdev Controller", 00:32:52.395 "serial_number": "SPDK0", 00:32:52.395 "firmware_revision": "25.01", 00:32:52.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:52.395 "oacs": { 00:32:52.395 "security": 0, 00:32:52.395 "format": 0, 00:32:52.395 "firmware": 0, 00:32:52.395 "ns_manage": 0 00:32:52.395 }, 00:32:52.395 "multi_ctrlr": true, 00:32:52.395 "ana_reporting": false 00:32:52.395 }, 00:32:52.395 "vs": { 00:32:52.395 "nvme_version": "1.3" 00:32:52.395 }, 00:32:52.395 "ns_data": { 00:32:52.395 "id": 1, 00:32:52.395 "can_share": true 00:32:52.395 } 00:32:52.395 } 00:32:52.395 ], 00:32:52.395 "mp_policy": "active_passive" 00:32:52.395 } 00:32:52.395 } 00:32:52.395 ] 00:32:52.395 18:31:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2222759 00:32:52.395 18:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:52.395 18:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:52.395 Running I/O for 10 seconds... 00:32:53.778 Latency(us) 00:32:53.778 [2024-11-19T17:31:55.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:53.778 Nvme0n1 : 1.00 17514.00 68.41 0.00 0.00 0.00 0.00 0.00 00:32:53.778 [2024-11-19T17:31:55.249Z] =================================================================================================================== 00:32:53.778 [2024-11-19T17:31:55.249Z] Total : 17514.00 68.41 0.00 0.00 0.00 0.00 0.00 00:32:53.778 00:32:54.348 18:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cfe2cd63-3813-4011-9f75-215b64f2c010 00:32:54.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:54.348 Nvme0n1 : 2.00 17716.50 69.21 0.00 0.00 0.00 0.00 0.00 00:32:54.348 [2024-11-19T17:31:55.819Z] =================================================================================================================== 00:32:54.348 [2024-11-19T17:31:55.819Z] Total : 17716.50 69.21 0.00 0.00 0.00 0.00 0.00 00:32:54.348 00:32:54.609 true 00:32:54.609 18:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u cfe2cd63-3813-4011-9f75-215b64f2c010 00:32:54.609 18:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:54.609 18:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:54.609 18:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:54.609 18:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2222759 00:32:55.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:55.550 Nvme0n1 : 3.00 17822.33 69.62 0.00 0.00 0.00 0.00 0.00 00:32:55.550 [2024-11-19T17:31:57.021Z] =================================================================================================================== 00:32:55.550 [2024-11-19T17:31:57.021Z] Total : 17822.33 69.62 0.00 0.00 0.00 0.00 0.00 00:32:55.550 00:32:56.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:56.492 Nvme0n1 : 4.00 17875.25 69.83 0.00 0.00 0.00 0.00 0.00 00:32:56.492 [2024-11-19T17:31:57.963Z] =================================================================================================================== 00:32:56.492 [2024-11-19T17:31:57.963Z] Total : 17875.25 69.83 0.00 0.00 0.00 0.00 0.00 00:32:56.492 00:32:57.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:57.434 Nvme0n1 : 5.00 18748.60 73.24 0.00 0.00 0.00 0.00 0.00 00:32:57.434 [2024-11-19T17:31:58.905Z] =================================================================================================================== 00:32:57.434 [2024-11-19T17:31:58.905Z] Total : 18748.60 73.24 0.00 0.00 0.00 0.00 0.00 00:32:57.434 00:32:58.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:32:58.374 Nvme0n1 : 6.00 19878.33 77.65 0.00 0.00 0.00 0.00 0.00 00:32:58.374 [2024-11-19T17:31:59.845Z] =================================================================================================================== 00:32:58.374 [2024-11-19T17:31:59.845Z] Total : 19878.33 77.65 0.00 0.00 0.00 0.00 0.00 00:32:58.374 00:32:59.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:59.758 Nvme0n1 : 7.00 20667.14 80.73 0.00 0.00 0.00 0.00 0.00 00:32:59.758 [2024-11-19T17:32:01.229Z] =================================================================================================================== 00:32:59.758 [2024-11-19T17:32:01.229Z] Total : 20667.14 80.73 0.00 0.00 0.00 0.00 0.00 00:32:59.758 00:33:00.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:00.706 Nvme0n1 : 8.00 21274.62 83.10 0.00 0.00 0.00 0.00 0.00 00:33:00.706 [2024-11-19T17:32:02.177Z] =================================================================================================================== 00:33:00.706 [2024-11-19T17:32:02.177Z] Total : 21274.62 83.10 0.00 0.00 0.00 0.00 0.00 00:33:00.706 00:33:01.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:01.377 Nvme0n1 : 9.00 21747.11 84.95 0.00 0.00 0.00 0.00 0.00 00:33:01.377 [2024-11-19T17:32:02.848Z] =================================================================================================================== 00:33:01.377 [2024-11-19T17:32:02.848Z] Total : 21747.11 84.95 0.00 0.00 0.00 0.00 0.00 00:33:01.377 00:33:02.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:02.762 Nvme0n1 : 10.00 22125.10 86.43 0.00 0.00 0.00 0.00 0.00 00:33:02.763 [2024-11-19T17:32:04.234Z] =================================================================================================================== 00:33:02.763 [2024-11-19T17:32:04.234Z] Total : 22125.10 86.43 0.00 0.00 0.00 0.00 0.00 00:33:02.763 00:33:02.763 
00:33:02.763 Latency(us) 00:33:02.763 [2024-11-19T17:32:04.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:02.763 Nvme0n1 : 10.00 22127.80 86.44 0.00 0.00 5781.93 2990.08 30801.92 00:33:02.763 [2024-11-19T17:32:04.234Z] =================================================================================================================== 00:33:02.763 [2024-11-19T17:32:04.234Z] Total : 22127.80 86.44 0.00 0.00 5781.93 2990.08 30801.92 00:33:02.763 { 00:33:02.763 "results": [ 00:33:02.763 { 00:33:02.763 "job": "Nvme0n1", 00:33:02.763 "core_mask": "0x2", 00:33:02.763 "workload": "randwrite", 00:33:02.763 "status": "finished", 00:33:02.763 "queue_depth": 128, 00:33:02.763 "io_size": 4096, 00:33:02.763 "runtime": 10.004566, 00:33:02.763 "iops": 22127.79644814178, 00:33:02.763 "mibps": 86.43670487555383, 00:33:02.763 "io_failed": 0, 00:33:02.763 "io_timeout": 0, 00:33:02.763 "avg_latency_us": 5781.932230488589, 00:33:02.763 "min_latency_us": 2990.08, 00:33:02.763 "max_latency_us": 30801.92 00:33:02.763 } 00:33:02.763 ], 00:33:02.763 "core_count": 1 00:33:02.763 } 00:33:02.763 18:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2222431 00:33:02.763 18:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2222431 ']' 00:33:02.763 18:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2222431 00:33:02.763 18:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:33:02.763 18:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:02.763 18:32:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2222431 00:33:02.763 18:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:02.763 18:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:02.763 18:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2222431' 00:33:02.763 killing process with pid 2222431 00:33:02.763 18:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2222431 00:33:02.763 Received shutdown signal, test time was about 10.000000 seconds 00:33:02.763 00:33:02.763 Latency(us) 00:33:02.763 [2024-11-19T17:32:04.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.763 [2024-11-19T17:32:04.234Z] =================================================================================================================== 00:33:02.763 [2024-11-19T17:32:04.234Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:02.763 18:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2222431 00:33:02.763 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:02.763 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:03.024 18:32:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfe2cd63-3813-4011-9f75-215b64f2c010 00:33:03.024 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:03.285 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:03.285 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:03.285 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2218965 00:33:03.285 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2218965 00:33:03.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2218965 Killed "${NVMF_APP[@]}" "$@" 00:33:03.285 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:03.285 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:03.285 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:03.285 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:03.285 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:03.285 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2224782 00:33:03.285 18:32:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2224782 00:33:03.285 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:03.286 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2224782 ']' 00:33:03.286 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.286 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:03.286 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.286 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:03.286 18:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:03.286 [2024-11-19 18:32:04.632777] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:03.286 [2024-11-19 18:32:04.633803] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:33:03.286 [2024-11-19 18:32:04.633847] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:03.286 [2024-11-19 18:32:04.724259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.546 [2024-11-19 18:32:04.755649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:03.546 [2024-11-19 18:32:04.755676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:03.546 [2024-11-19 18:32:04.755682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:03.546 [2024-11-19 18:32:04.755687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:03.546 [2024-11-19 18:32:04.755691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:03.546 [2024-11-19 18:32:04.756133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.546 [2024-11-19 18:32:04.806694] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:03.546 [2024-11-19 18:32:04.806885] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:04.119 18:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:04.119 18:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:04.119 18:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:04.119 18:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:04.119 18:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:04.119 18:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:04.119 18:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:04.380 [2024-11-19 18:32:05.646567] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:04.380 [2024-11-19 18:32:05.646834] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:04.380 [2024-11-19 18:32:05.646929] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:04.380 18:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:04.380 18:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6c547c6a-c530-43ce-bf58-70835e70bf99 00:33:04.380 18:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=6c547c6a-c530-43ce-bf58-70835e70bf99 00:33:04.380 18:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:04.380 18:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:04.380 18:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:04.380 18:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:04.380 18:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:04.642 18:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6c547c6a-c530-43ce-bf58-70835e70bf99 -t 2000 00:33:04.642 [ 00:33:04.642 { 00:33:04.642 "name": "6c547c6a-c530-43ce-bf58-70835e70bf99", 00:33:04.642 "aliases": [ 00:33:04.642 "lvs/lvol" 00:33:04.642 ], 00:33:04.642 "product_name": "Logical Volume", 00:33:04.642 "block_size": 4096, 00:33:04.642 "num_blocks": 38912, 00:33:04.642 "uuid": "6c547c6a-c530-43ce-bf58-70835e70bf99", 00:33:04.642 "assigned_rate_limits": { 00:33:04.642 "rw_ios_per_sec": 0, 00:33:04.642 "rw_mbytes_per_sec": 0, 00:33:04.642 "r_mbytes_per_sec": 0, 00:33:04.642 "w_mbytes_per_sec": 0 00:33:04.642 }, 00:33:04.642 "claimed": false, 00:33:04.642 "zoned": false, 00:33:04.642 "supported_io_types": { 00:33:04.642 "read": true, 00:33:04.642 "write": true, 00:33:04.642 "unmap": true, 00:33:04.642 "flush": false, 00:33:04.642 "reset": true, 00:33:04.642 "nvme_admin": false, 00:33:04.642 "nvme_io": false, 00:33:04.642 "nvme_io_md": false, 00:33:04.642 "write_zeroes": true, 
00:33:04.642 "zcopy": false, 00:33:04.642 "get_zone_info": false, 00:33:04.642 "zone_management": false, 00:33:04.642 "zone_append": false, 00:33:04.642 "compare": false, 00:33:04.642 "compare_and_write": false, 00:33:04.642 "abort": false, 00:33:04.642 "seek_hole": true, 00:33:04.642 "seek_data": true, 00:33:04.642 "copy": false, 00:33:04.642 "nvme_iov_md": false 00:33:04.642 }, 00:33:04.642 "driver_specific": { 00:33:04.642 "lvol": { 00:33:04.642 "lvol_store_uuid": "cfe2cd63-3813-4011-9f75-215b64f2c010", 00:33:04.642 "base_bdev": "aio_bdev", 00:33:04.642 "thin_provision": false, 00:33:04.642 "num_allocated_clusters": 38, 00:33:04.642 "snapshot": false, 00:33:04.642 "clone": false, 00:33:04.642 "esnap_clone": false 00:33:04.642 } 00:33:04.642 } 00:33:04.642 } 00:33:04.642 ] 00:33:04.642 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:04.642 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfe2cd63-3813-4011-9f75-215b64f2c010 00:33:04.642 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:04.904 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:04.904 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfe2cd63-3813-4011-9f75-215b64f2c010 00:33:04.904 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:05.164 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:05.164 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:05.164 [2024-11-19 18:32:06.564640] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:05.164 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfe2cd63-3813-4011-9f75-215b64f2c010 00:33:05.164 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:05.164 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfe2cd63-3813-4011-9f75-215b64f2c010 00:33:05.164 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:05.164 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:05.164 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:05.164 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:05.164 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:05.164 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:05.165 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:05.165 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:05.165 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfe2cd63-3813-4011-9f75-215b64f2c010 00:33:05.425 request: 00:33:05.425 { 00:33:05.425 "uuid": "cfe2cd63-3813-4011-9f75-215b64f2c010", 00:33:05.425 "method": "bdev_lvol_get_lvstores", 00:33:05.425 "req_id": 1 00:33:05.425 } 00:33:05.425 Got JSON-RPC error response 00:33:05.425 response: 00:33:05.425 { 00:33:05.425 "code": -19, 00:33:05.425 "message": "No such device" 00:33:05.425 } 00:33:05.425 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:05.425 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:05.425 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:05.425 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:05.425 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:05.686 aio_bdev 00:33:05.686 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6c547c6a-c530-43ce-bf58-70835e70bf99 00:33:05.686 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6c547c6a-c530-43ce-bf58-70835e70bf99 00:33:05.686 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:05.686 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:05.686 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:05.686 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:05.686 18:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:05.686 18:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6c547c6a-c530-43ce-bf58-70835e70bf99 -t 2000 00:33:05.948 [ 00:33:05.948 { 00:33:05.948 "name": "6c547c6a-c530-43ce-bf58-70835e70bf99", 00:33:05.948 "aliases": [ 00:33:05.948 "lvs/lvol" 00:33:05.948 ], 00:33:05.948 "product_name": "Logical Volume", 00:33:05.948 "block_size": 4096, 00:33:05.948 "num_blocks": 38912, 00:33:05.948 "uuid": "6c547c6a-c530-43ce-bf58-70835e70bf99", 00:33:05.948 "assigned_rate_limits": { 00:33:05.948 "rw_ios_per_sec": 0, 00:33:05.948 "rw_mbytes_per_sec": 0, 00:33:05.948 
"r_mbytes_per_sec": 0, 00:33:05.948 "w_mbytes_per_sec": 0 00:33:05.948 }, 00:33:05.948 "claimed": false, 00:33:05.948 "zoned": false, 00:33:05.948 "supported_io_types": { 00:33:05.948 "read": true, 00:33:05.948 "write": true, 00:33:05.948 "unmap": true, 00:33:05.948 "flush": false, 00:33:05.948 "reset": true, 00:33:05.948 "nvme_admin": false, 00:33:05.948 "nvme_io": false, 00:33:05.948 "nvme_io_md": false, 00:33:05.948 "write_zeroes": true, 00:33:05.948 "zcopy": false, 00:33:05.948 "get_zone_info": false, 00:33:05.948 "zone_management": false, 00:33:05.948 "zone_append": false, 00:33:05.948 "compare": false, 00:33:05.948 "compare_and_write": false, 00:33:05.948 "abort": false, 00:33:05.948 "seek_hole": true, 00:33:05.948 "seek_data": true, 00:33:05.948 "copy": false, 00:33:05.948 "nvme_iov_md": false 00:33:05.948 }, 00:33:05.948 "driver_specific": { 00:33:05.948 "lvol": { 00:33:05.948 "lvol_store_uuid": "cfe2cd63-3813-4011-9f75-215b64f2c010", 00:33:05.948 "base_bdev": "aio_bdev", 00:33:05.948 "thin_provision": false, 00:33:05.948 "num_allocated_clusters": 38, 00:33:05.948 "snapshot": false, 00:33:05.948 "clone": false, 00:33:05.948 "esnap_clone": false 00:33:05.948 } 00:33:05.948 } 00:33:05.948 } 00:33:05.948 ] 00:33:05.948 18:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:05.948 18:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:05.948 18:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfe2cd63-3813-4011-9f75-215b64f2c010 00:33:06.210 18:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:06.210 18:32:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cfe2cd63-3813-4011-9f75-215b64f2c010 00:33:06.210 18:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:06.470 18:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:06.470 18:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6c547c6a-c530-43ce-bf58-70835e70bf99 00:33:06.470 18:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cfe2cd63-3813-4011-9f75-215b64f2c010 00:33:06.730 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:06.990 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:06.991 00:33:06.991 real 0m17.693s 00:33:06.991 user 0m35.401s 00:33:06.991 sys 0m3.257s 00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:06.991 ************************************ 00:33:06.991 END TEST lvs_grow_dirty 00:33:06.991 ************************************ 
00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:06.991 nvmf_trace.0 00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:06.991 18:32:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:06.991 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:06.991 rmmod nvme_tcp 00:33:06.991 rmmod nvme_fabrics 00:33:07.250 rmmod nvme_keyring 00:33:07.250 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:07.250 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:07.250 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:07.250 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2224782 ']' 00:33:07.250 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2224782 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2224782 ']' 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2224782 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2224782 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:07.251 
18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2224782' 00:33:07.251 killing process with pid 2224782 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2224782 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2224782 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.251 18:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.797 
18:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:09.797 00:33:09.797 real 0m45.068s 00:33:09.797 user 0m53.424s 00:33:09.797 sys 0m11.211s 00:33:09.797 18:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:09.797 18:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:09.797 ************************************ 00:33:09.797 END TEST nvmf_lvs_grow 00:33:09.797 ************************************ 00:33:09.797 18:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:09.797 18:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:09.797 18:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:09.797 18:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:09.797 ************************************ 00:33:09.797 START TEST nvmf_bdev_io_wait 00:33:09.797 ************************************ 00:33:09.797 18:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:09.797 * Looking for test storage... 
00:33:09.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:09.797 18:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:09.797 18:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:33:09.797 18:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:09.797 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:09.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.798 --rc genhtml_branch_coverage=1 00:33:09.798 --rc genhtml_function_coverage=1 00:33:09.798 --rc genhtml_legend=1 00:33:09.798 --rc geninfo_all_blocks=1 00:33:09.798 --rc geninfo_unexecuted_blocks=1 00:33:09.798 00:33:09.798 ' 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:09.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.798 --rc genhtml_branch_coverage=1 00:33:09.798 --rc genhtml_function_coverage=1 00:33:09.798 --rc genhtml_legend=1 00:33:09.798 --rc geninfo_all_blocks=1 00:33:09.798 --rc geninfo_unexecuted_blocks=1 00:33:09.798 00:33:09.798 ' 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:09.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.798 --rc genhtml_branch_coverage=1 00:33:09.798 --rc genhtml_function_coverage=1 00:33:09.798 --rc genhtml_legend=1 00:33:09.798 --rc geninfo_all_blocks=1 00:33:09.798 --rc geninfo_unexecuted_blocks=1 00:33:09.798 00:33:09.798 ' 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:09.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.798 --rc genhtml_branch_coverage=1 00:33:09.798 --rc genhtml_function_coverage=1 
00:33:09.798 --rc genhtml_legend=1 00:33:09.798 --rc geninfo_all_blocks=1 00:33:09.798 --rc geninfo_unexecuted_blocks=1 00:33:09.798 00:33:09.798 ' 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:09.798 18:32:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.798 18:32:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:09.798 18:32:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:09.798 18:32:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:33:09.798 18:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:17.946 18:32:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:17.946 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:17.947 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:17.947 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:17.947 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:17.947 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:17.947 18:32:18 
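The discovery loop above (nvmf/common.sh@410-429) resolves each detected PCI function to its kernel net device by globbing sysfs. A minimal re-creation of that lookup — the function name `list_pci_netdevs` is ours, not the harness's; the sysfs layout it relies on is standard Linux:

```shell
# Map PCI functions to bound net devices, the way nvmf/common.sh does above:
# every netdev bound to a PCI function appears as a directory under
# /sys/bus/pci/devices/<bdf>/net/.
list_pci_netdevs() {
    local pci devs d
    for pci in "$@"; do
        devs=( "/sys/bus/pci/devices/$pci/net/"* )
        if [ ! -e "${devs[0]}" ]; then
            # glob did not match: no driver bound, or no such device
            echo "$pci: (no netdev bound)"
            continue
        fi
        for d in "${devs[@]}"; do
            echo "$pci: ${d##*/}"   # strip the path, keep the interface name
        done
    done
}
```

On the machine in this log, `list_pci_netdevs 0000:4b:00.0 0000:4b:00.1` would report the two cvl_0_* interfaces the harness found.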
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:17.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:17.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:33:17.947 00:33:17.947 --- 10.0.0.2 ping statistics --- 00:33:17.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.947 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:17.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:17.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:33:17.947 00:33:17.947 --- 10.0.0.1 ping statistics --- 00:33:17.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:17.947 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:17.947 18:32:18 
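The nvmf_tcp_init steps above move one NIC port into a private namespace for the target and leave its peer in the default namespace for the initiator, then verify reachability both ways. A condensed sketch of that sequence — interface names and addresses are taken from the log, but this is an approximation, not the SPDK script itself; the `run`/`DRY_RUN` wrapper is added here so the steps can be previewed without root:

```shell
run() {
    # With DRY_RUN=1, print each command instead of executing it,
    # so the sequence can be inspected without root privileges.
    if [ "${DRY_RUN:-0}" = 1 ]; then echo "+ $*"; else "$@"; fi
}

setup_nvmf_netns() {
    local target_if=${1:-cvl_0_0} initiator_if=${2:-cvl_0_1}
    local netns="${target_if}_ns_spdk"

    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$netns"
    run ip link set "$target_if" netns "$netns"
    run ip addr add 10.0.0.1/24 dev "$initiator_if"                      # initiator IP
    run ip netns exec "$netns" ip addr add 10.0.0.2/24 dev "$target_if"  # target IP
    run ip link set "$initiator_if" up
    run ip netns exec "$netns" ip link set "$target_if" up
    run ip netns exec "$netns" ip link set lo up
    # open the NVMe/TCP port on the initiator side, then ping both directions
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2
    run ip netns exec "$netns" ping -c 1 10.0.0.1
}
```

`DRY_RUN=1 setup_nvmf_netns` prints the planned commands; run without DRY_RUN, as root and with real ports present, to apply them.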
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:17.947 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:17.948 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:17.948 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2229792 00:33:17.948 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2229792 00:33:17.948 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:17.948 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2229792 ']' 00:33:17.948 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.948 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:17.948 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:17.948 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:17.948 18:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:17.948 [2024-11-19 18:32:18.707422] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:17.948 [2024-11-19 18:32:18.708561] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:33:17.948 [2024-11-19 18:32:18.708615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:17.948 [2024-11-19 18:32:18.808246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:17.948 [2024-11-19 18:32:18.862529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:17.948 [2024-11-19 18:32:18.862580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:17.948 [2024-11-19 18:32:18.862589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:17.948 [2024-11-19 18:32:18.862596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:17.948 [2024-11-19 18:32:18.862602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:17.948 [2024-11-19 18:32:18.864861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:17.948 [2024-11-19 18:32:18.865020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:17.948 [2024-11-19 18:32:18.865147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:17.948 [2024-11-19 18:32:18.865148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.948 [2024-11-19 18:32:18.865774] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.210 18:32:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.210 [2024-11-19 18:32:19.633436] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:18.210 [2024-11-19 18:32:19.634090] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:18.210 [2024-11-19 18:32:19.634145] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:18.210 [2024-11-19 18:32:19.634314] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.210 [2024-11-19 18:32:19.645998] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:18.210 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.211 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.473 Malloc0 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.473 18:32:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:18.473 [2024-11-19 18:32:19.722609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2229867 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2229869 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:18.473 18:32:19 
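The rpc_cmd sequence above (bdev_io_wait.sh@18 through @25) is easier to read pulled out of the xtrace. A sketch, with the calls collected into one list — `rpc_calls`/`apply_rpc_calls` are our names, and we assume rpc_cmd resolves to SPDK's scripts/rpc.py run inside the target namespace, which is how this harness drives the target:

```shell
# The target-configuration RPCs from the log, in order. The target was
# started with --wait-for-rpc, so options must be set before init finishes.
RPC="ip netns exec cvl_0_0_ns_spdk scripts/rpc.py"

rpc_calls=(
    "bdev_set_options -p 5 -c 1"                # bdev options, pre-init only
    "framework_start_init"                      # complete the deferred startup
    "nvmf_create_transport -t tcp -o -u 8192"   # TCP transport, flags as in the log
    "bdev_malloc_create 64 512 -b Malloc0"      # 64 MB ramdisk, 512 B blocks
    "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
    "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
    "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)

apply_rpc_calls() {
    # Execute against a live target; echo first so a failure is attributable.
    local call
    for call in "${rpc_calls[@]}"; do
        echo "+ $RPC $call"
        $RPC $call
    done
}
```

After the final call the target logs the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice seen above.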
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:18.473 { 00:33:18.473 "params": { 00:33:18.473 "name": "Nvme$subsystem", 00:33:18.473 "trtype": "$TEST_TRANSPORT", 00:33:18.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.473 "adrfam": "ipv4", 00:33:18.473 "trsvcid": "$NVMF_PORT", 00:33:18.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.473 "hdgst": ${hdgst:-false}, 00:33:18.473 "ddgst": ${ddgst:-false} 00:33:18.473 }, 00:33:18.473 "method": "bdev_nvme_attach_controller" 00:33:18.473 } 00:33:18.473 EOF 00:33:18.473 )") 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2229872 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:18.473 18:32:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:18.473 { 00:33:18.473 "params": { 00:33:18.473 "name": "Nvme$subsystem", 00:33:18.473 "trtype": "$TEST_TRANSPORT", 00:33:18.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.473 "adrfam": "ipv4", 00:33:18.473 "trsvcid": "$NVMF_PORT", 00:33:18.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.473 "hdgst": ${hdgst:-false}, 00:33:18.473 "ddgst": ${ddgst:-false} 00:33:18.473 }, 00:33:18.473 "method": "bdev_nvme_attach_controller" 00:33:18.473 } 00:33:18.473 EOF 00:33:18.473 )") 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2229875 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:18.473 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:18.474 { 00:33:18.474 "params": { 00:33:18.474 "name": 
"Nvme$subsystem", 00:33:18.474 "trtype": "$TEST_TRANSPORT", 00:33:18.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.474 "adrfam": "ipv4", 00:33:18.474 "trsvcid": "$NVMF_PORT", 00:33:18.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.474 "hdgst": ${hdgst:-false}, 00:33:18.474 "ddgst": ${ddgst:-false} 00:33:18.474 }, 00:33:18.474 "method": "bdev_nvme_attach_controller" 00:33:18.474 } 00:33:18.474 EOF 00:33:18.474 )") 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:18.474 { 00:33:18.474 "params": { 00:33:18.474 "name": "Nvme$subsystem", 00:33:18.474 "trtype": "$TEST_TRANSPORT", 00:33:18.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.474 "adrfam": "ipv4", 00:33:18.474 "trsvcid": "$NVMF_PORT", 00:33:18.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.474 "hdgst": ${hdgst:-false}, 00:33:18.474 "ddgst": ${ddgst:-false} 00:33:18.474 }, 00:33:18.474 "method": 
"bdev_nvme_attach_controller" 00:33:18.474 } 00:33:18.474 EOF 00:33:18.474 )") 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2229867 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:18.474 "params": { 00:33:18.474 "name": "Nvme1", 00:33:18.474 "trtype": "tcp", 00:33:18.474 "traddr": "10.0.0.2", 00:33:18.474 "adrfam": "ipv4", 00:33:18.474 "trsvcid": "4420", 00:33:18.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:18.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:18.474 "hdgst": false, 00:33:18.474 "ddgst": false 00:33:18.474 }, 00:33:18.474 "method": "bdev_nvme_attach_controller" 00:33:18.474 }' 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:18.474 "params": { 00:33:18.474 "name": "Nvme1", 00:33:18.474 "trtype": "tcp", 00:33:18.474 "traddr": "10.0.0.2", 00:33:18.474 "adrfam": "ipv4", 00:33:18.474 "trsvcid": "4420", 00:33:18.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:18.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:18.474 "hdgst": false, 00:33:18.474 "ddgst": false 00:33:18.474 }, 00:33:18.474 "method": "bdev_nvme_attach_controller" 00:33:18.474 }' 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:18.474 "params": { 00:33:18.474 "name": "Nvme1", 00:33:18.474 "trtype": "tcp", 00:33:18.474 "traddr": "10.0.0.2", 00:33:18.474 "adrfam": "ipv4", 00:33:18.474 "trsvcid": "4420", 00:33:18.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:18.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:18.474 "hdgst": false, 00:33:18.474 "ddgst": false 00:33:18.474 }, 00:33:18.474 "method": "bdev_nvme_attach_controller" 00:33:18.474 }' 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:18.474 18:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:18.474 "params": { 00:33:18.474 "name": "Nvme1", 00:33:18.474 "trtype": "tcp", 00:33:18.474 "traddr": "10.0.0.2", 00:33:18.474 "adrfam": "ipv4", 00:33:18.474 "trsvcid": "4420", 00:33:18.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:18.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:18.474 "hdgst": false, 00:33:18.474 "ddgst": false 00:33:18.474 }, 00:33:18.474 "method": "bdev_nvme_attach_controller" 
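The four gen_nvmf_target_json expansions above all render the same attach-controller params with the placeholders filled in. A sketch of the assembled JSON fed to each bdevperf via --json /dev/fd/63 — the inner params block is verbatim from the log, while the outer "subsystems"/"config" wrapper is an assumption based on SPDK's standard JSON-config layout, and `gen_attach_json` is our name:

```shell
# Emit a bdevperf JSON config that attaches one NVMe-oF/TCP controller,
# matching the rendered params printed in the log above.
gen_attach_json() {
    local traddr=${1:-10.0.0.2} svc=${2:-4420}
    cat <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "$traddr",
            "adrfam": "ipv4",
            "trsvcid": "$svc",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}
```

bdevperf would consume it much as the harness does, e.g. `bdevperf -m 0x10 -i 1 --json <(gen_attach_json) -q 128 -o 4096 -w write -t 1 -s 256`, with process substitution standing in for the harness's /dev/fd/63.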
00:33:18.474 }' 00:33:18.474 [2024-11-19 18:32:19.780682] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:33:18.474 [2024-11-19 18:32:19.780754] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:18.474 [2024-11-19 18:32:19.783210] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:33:18.474 [2024-11-19 18:32:19.783275] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:18.474 [2024-11-19 18:32:19.785441] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:33:18.474 [2024-11-19 18:32:19.785441] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:33:18.474 [2024-11-19 18:32:19.785515] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:18.474 [2024-11-19 18:32:19.785516] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:18.737 [2024-11-19 18:32:19.996647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.737 [2024-11-19 18:32:20.040198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:18.737 [2024-11-19 18:32:20.090536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.737 [2024-11-19 18:32:20.133462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:18.737 [2024-11-19 18:32:20.189276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.998 [2024-11-19 18:32:20.228196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:18.998 [2024-11-19 18:32:20.242218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.998 [2024-11-19 18:32:20.282498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:18.998 Running I/O for 1 seconds... 00:33:18.998 Running I/O for 1 seconds... 00:33:18.998 Running I/O for 1 seconds... 00:33:19.260 Running I/O for 1 seconds...
00:33:20.205 11063.00 IOPS, 43.21 MiB/s 00:33:20.205 Latency(us) 00:33:20.205 [2024-11-19T17:32:21.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.206 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:20.206 Nvme1n1 : 1.01 11115.70 43.42 0.00 0.00 11467.95 2334.72 14636.37 00:33:20.206 [2024-11-19T17:32:21.677Z] =================================================================================================================== 00:33:20.206 [2024-11-19T17:32:21.677Z] Total : 11115.70 43.42 0.00 0.00 11467.95 2334.72 14636.37 00:33:20.206 10122.00 IOPS, 39.54 MiB/s 00:33:20.206 Latency(us) 00:33:20.206 [2024-11-19T17:32:21.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.206 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:20.206 Nvme1n1 : 1.01 10174.97 39.75 0.00 0.00 12528.92 5816.32 16711.68 00:33:20.206 [2024-11-19T17:32:21.677Z] =================================================================================================================== 00:33:20.206 [2024-11-19T17:32:21.677Z] Total : 10174.97 39.75 0.00 0.00 12528.92 5816.32 16711.68 00:33:20.206 187720.00 IOPS, 733.28 MiB/s 00:33:20.206 Latency(us) 00:33:20.206 [2024-11-19T17:32:21.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.206 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:20.206 Nvme1n1 : 1.00 187347.50 731.83 0.00 0.00 679.21 300.37 1966.08 00:33:20.206 [2024-11-19T17:32:21.677Z] =================================================================================================================== 00:33:20.206 [2024-11-19T17:32:21.677Z] Total : 187347.50 731.83 0.00 0.00 679.21 300.37 1966.08 00:33:20.206 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2229869 00:33:20.206 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@39 -- # wait 2229872 00:33:20.206 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2229875 00:33:20.206 12008.00 IOPS, 46.91 MiB/s 00:33:20.206 Latency(us) 00:33:20.206 [2024-11-19T17:32:21.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.206 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:20.206 Nvme1n1 : 1.01 12096.35 47.25 0.00 0.00 10550.58 3467.95 19005.44 00:33:20.206 [2024-11-19T17:32:21.677Z] =================================================================================================================== 00:33:20.206 [2024-11-19T17:32:21.677Z] Total : 12096.35 47.25 0.00 0.00 10550.58 3467.95 19005.44 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:20.468 18:32:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:20.468 rmmod nvme_tcp 00:33:20.468 rmmod nvme_fabrics 00:33:20.468 rmmod nvme_keyring 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2229792 ']' 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2229792 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2229792 ']' 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2229792 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2229792 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2229792' 00:33:20.468 killing process with pid 2229792 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2229792 00:33:20.468 18:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2229792 00:33:20.729 18:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:20.729 18:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:20.729 18:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:20.729 18:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:20.729 18:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:33:20.729 18:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:20.729 18:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:20.729 18:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:20.729 18:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:20.730 18:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.730 18:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.730 
18:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.645 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.645 00:33:22.645 real 0m13.260s 00:33:22.645 user 0m16.174s 00:33:22.645 sys 0m7.819s 00:33:22.645 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.645 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:22.645 ************************************ 00:33:22.645 END TEST nvmf_bdev_io_wait 00:33:22.645 ************************************ 00:33:22.908 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:22.908 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:22.908 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:22.908 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:22.908 ************************************ 00:33:22.908 START TEST nvmf_queue_depth 00:33:22.908 ************************************ 00:33:22.908 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:22.908 * Looking for test storage... 
00:33:22.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:22.908 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:22.908 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:33:22.908 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:23.170 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:23.170 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:23.170 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:23.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.171 --rc genhtml_branch_coverage=1 00:33:23.171 --rc genhtml_function_coverage=1 00:33:23.171 --rc genhtml_legend=1 00:33:23.171 --rc geninfo_all_blocks=1 00:33:23.171 --rc geninfo_unexecuted_blocks=1 00:33:23.171 00:33:23.171 ' 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:23.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.171 --rc genhtml_branch_coverage=1 00:33:23.171 --rc genhtml_function_coverage=1 00:33:23.171 --rc genhtml_legend=1 00:33:23.171 --rc geninfo_all_blocks=1 00:33:23.171 --rc geninfo_unexecuted_blocks=1 00:33:23.171 00:33:23.171 ' 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:23.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.171 --rc genhtml_branch_coverage=1 00:33:23.171 --rc genhtml_function_coverage=1 00:33:23.171 --rc genhtml_legend=1 00:33:23.171 --rc geninfo_all_blocks=1 00:33:23.171 --rc geninfo_unexecuted_blocks=1 00:33:23.171 00:33:23.171 ' 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:23.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.171 --rc genhtml_branch_coverage=1 00:33:23.171 --rc genhtml_function_coverage=1 00:33:23.171 --rc genhtml_legend=1 00:33:23.171 --rc 
geninfo_all_blocks=1 00:33:23.171 --rc geninfo_unexecuted_blocks=1 00:33:23.171 00:33:23.171 ' 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.171 18:32:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:23.171 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:23.172 18:32:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:23.172 18:32:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:23.172 18:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:31.318 
18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:31.318 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.318 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.319 18:32:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:31.319 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:31.319 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:31.319 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:31.319 18:32:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:31.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:31.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:33:31.319 00:33:31.319 --- 10.0.0.2 ping statistics --- 00:33:31.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.319 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:31.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:31.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:33:31.319 00:33:31.319 --- 10.0.0.1 ping statistics --- 00:33:31.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.319 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:31.319 18:32:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2234571 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2234571 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2234571 ']' 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:31.319 18:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:31.319 [2024-11-19 18:32:31.960476] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:31.319 [2024-11-19 18:32:31.961643] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:33:31.319 [2024-11-19 18:32:31.961694] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.319 [2024-11-19 18:32:32.049928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.320 [2024-11-19 18:32:32.101127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:31.320 [2024-11-19 18:32:32.101191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:31.320 [2024-11-19 18:32:32.101201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:31.320 [2024-11-19 18:32:32.101208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:31.320 [2024-11-19 18:32:32.101214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:31.320 [2024-11-19 18:32:32.101946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.320 [2024-11-19 18:32:32.178406] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:31.320 [2024-11-19 18:32:32.178716] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:31.581 [2024-11-19 18:32:32.846811] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:31.581 Malloc0 00:33:31.581 18:32:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:31.581 [2024-11-19 18:32:32.931012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.581 
18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2234702 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2234702 /var/tmp/bdevperf.sock 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2234702 ']' 00:33:31.581 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:31.582 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:31.582 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:31.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:31.582 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:31.582 18:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:31.582 [2024-11-19 18:32:32.992007] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:33:31.582 [2024-11-19 18:32:32.992076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2234702 ] 00:33:31.843 [2024-11-19 18:32:33.082186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.843 [2024-11-19 18:32:33.134795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.416 18:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:32.416 18:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:32.416 18:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:32.416 18:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.416 18:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:32.677 NVMe0n1 00:33:32.677 18:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.677 18:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:32.677 Running I/O for 10 seconds... 
00:33:35.003 8192.00 IOPS, 32.00 MiB/s [2024-11-19T17:32:37.416Z] 9205.00 IOPS, 35.96 MiB/s [2024-11-19T17:32:38.358Z] 10021.67 IOPS, 39.15 MiB/s [2024-11-19T17:32:39.311Z] 10796.00 IOPS, 42.17 MiB/s [2024-11-19T17:32:40.255Z] 11364.80 IOPS, 44.39 MiB/s [2024-11-19T17:32:41.200Z] 11750.00 IOPS, 45.90 MiB/s [2024-11-19T17:32:42.588Z] 11991.71 IOPS, 46.84 MiB/s [2024-11-19T17:32:43.529Z] 12180.75 IOPS, 47.58 MiB/s [2024-11-19T17:32:44.470Z] 12357.67 IOPS, 48.27 MiB/s [2024-11-19T17:32:44.470Z] 12496.60 IOPS, 48.81 MiB/s 00:33:42.999 Latency(us) 00:33:42.999 [2024-11-19T17:32:44.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.999 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:42.999 Verification LBA range: start 0x0 length 0x4000 00:33:42.999 NVMe0n1 : 10.06 12521.20 48.91 0.00 0.00 81496.44 25231.36 78643.20 00:33:42.999 [2024-11-19T17:32:44.470Z] =================================================================================================================== 00:33:42.999 [2024-11-19T17:32:44.470Z] Total : 12521.20 48.91 0.00 0.00 81496.44 25231.36 78643.20 00:33:42.999 { 00:33:42.999 "results": [ 00:33:43.000 { 00:33:43.000 "job": "NVMe0n1", 00:33:43.000 "core_mask": "0x1", 00:33:43.000 "workload": "verify", 00:33:43.000 "status": "finished", 00:33:43.000 "verify_range": { 00:33:43.000 "start": 0, 00:33:43.000 "length": 16384 00:33:43.000 }, 00:33:43.000 "queue_depth": 1024, 00:33:43.000 "io_size": 4096, 00:33:43.000 "runtime": 10.060377, 00:33:43.000 "iops": 12521.20074625434, 00:33:43.000 "mibps": 48.91094041505602, 00:33:43.000 "io_failed": 0, 00:33:43.000 "io_timeout": 0, 00:33:43.000 "avg_latency_us": 81496.44404928236, 00:33:43.000 "min_latency_us": 25231.36, 00:33:43.000 "max_latency_us": 78643.2 00:33:43.000 } 00:33:43.000 ], 00:33:43.000 "core_count": 1 00:33:43.000 } 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 2234702 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2234702 ']' 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2234702 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2234702 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2234702' 00:33:43.000 killing process with pid 2234702 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2234702 00:33:43.000 Received shutdown signal, test time was about 10.000000 seconds 00:33:43.000 00:33:43.000 Latency(us) 00:33:43.000 [2024-11-19T17:32:44.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.000 [2024-11-19T17:32:44.471Z] =================================================================================================================== 00:33:43.000 [2024-11-19T17:32:44.471Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2234702 00:33:43.000 18:32:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:43.000 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:43.000 rmmod nvme_tcp 00:33:43.000 rmmod nvme_fabrics 00:33:43.000 rmmod nvme_keyring 00:33:43.260 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:43.260 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:43.260 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:43.260 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2234571 ']' 00:33:43.260 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2234571 00:33:43.260 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2234571 ']' 00:33:43.260 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2234571 00:33:43.260 18:32:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:43.260 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:43.260 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2234571 00:33:43.261 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:43.261 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:43.261 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2234571' 00:33:43.261 killing process with pid 2234571 00:33:43.261 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2234571 00:33:43.261 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2234571 00:33:43.261 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:43.261 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:43.261 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:43.261 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:43.261 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:43.261 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:43.261 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:33:43.261 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:43.261 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:43.261 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.261 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:43.261 18:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.805 18:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:45.805 00:33:45.805 real 0m22.550s 00:33:45.805 user 0m24.775s 00:33:45.805 sys 0m7.498s 00:33:45.805 18:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:45.805 18:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:45.805 ************************************ 00:33:45.805 END TEST nvmf_queue_depth 00:33:45.805 ************************************ 00:33:45.805 18:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:45.805 18:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:45.805 18:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:45.805 18:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:45.805 ************************************ 00:33:45.805 START 
TEST nvmf_target_multipath 00:33:45.805 ************************************ 00:33:45.805 18:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:45.805 * Looking for test storage... 00:33:45.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:45.805 18:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:45.805 18:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:33:45.805 18:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:45.805 18:32:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:45.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.805 --rc genhtml_branch_coverage=1 00:33:45.805 --rc genhtml_function_coverage=1 00:33:45.805 --rc genhtml_legend=1 00:33:45.805 --rc geninfo_all_blocks=1 00:33:45.805 --rc geninfo_unexecuted_blocks=1 00:33:45.805 00:33:45.805 ' 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:45.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.805 --rc genhtml_branch_coverage=1 00:33:45.805 --rc genhtml_function_coverage=1 00:33:45.805 --rc genhtml_legend=1 00:33:45.805 --rc geninfo_all_blocks=1 00:33:45.805 --rc geninfo_unexecuted_blocks=1 00:33:45.805 00:33:45.805 ' 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:45.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.805 --rc genhtml_branch_coverage=1 00:33:45.805 --rc genhtml_function_coverage=1 00:33:45.805 --rc genhtml_legend=1 00:33:45.805 --rc geninfo_all_blocks=1 00:33:45.805 --rc geninfo_unexecuted_blocks=1 00:33:45.805 00:33:45.805 ' 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:45.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.805 --rc genhtml_branch_coverage=1 00:33:45.805 --rc genhtml_function_coverage=1 00:33:45.805 --rc genhtml_legend=1 00:33:45.805 --rc geninfo_all_blocks=1 00:33:45.805 --rc geninfo_unexecuted_blocks=1 00:33:45.805 00:33:45.805 ' 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:45.805 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:45.806 18:32:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:45.806 18:32:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:45.806 18:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:53.956 18:32:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:53.956 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:53.956 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:53.956 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:53.956 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.957 18:32:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:53.957 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:53.957 18:32:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:53.957 18:32:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:53.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:53.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:33:53.957 00:33:53.957 --- 10.0.0.2 ping statistics --- 00:33:53.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.957 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:53.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:53.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:33:53.957 00:33:53.957 --- 10.0.0.1 ping statistics --- 00:33:53.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.957 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:53.957 only one NIC for nvmf test 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:53.957 18:32:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:53.957 rmmod nvme_tcp 00:33:53.957 rmmod nvme_fabrics 00:33:53.957 rmmod nvme_keyring 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:53.957 18:32:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:53.957 18:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.343 
18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:55.343 00:33:55.343 real 0m9.962s 00:33:55.343 user 0m2.171s 00:33:55.343 sys 0m5.730s 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:55.343 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:55.343 ************************************ 00:33:55.343 END TEST nvmf_target_multipath 00:33:55.343 ************************************ 00:33:55.605 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:55.605 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:55.605 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:55.605 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:55.605 ************************************ 00:33:55.605 START TEST nvmf_zcopy 00:33:55.605 ************************************ 00:33:55.605 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:55.605 * Looking for test storage... 
00:33:55.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:55.605 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:55.605 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:33:55.605 18:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:55.605 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:55.868 18:32:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:55.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.868 --rc genhtml_branch_coverage=1 00:33:55.868 --rc genhtml_function_coverage=1 00:33:55.868 --rc genhtml_legend=1 00:33:55.868 --rc geninfo_all_blocks=1 00:33:55.868 --rc geninfo_unexecuted_blocks=1 00:33:55.868 00:33:55.868 ' 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:55.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.868 --rc genhtml_branch_coverage=1 00:33:55.868 --rc genhtml_function_coverage=1 00:33:55.868 --rc genhtml_legend=1 00:33:55.868 --rc geninfo_all_blocks=1 00:33:55.868 --rc geninfo_unexecuted_blocks=1 00:33:55.868 00:33:55.868 ' 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:55.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.868 --rc genhtml_branch_coverage=1 00:33:55.868 --rc genhtml_function_coverage=1 00:33:55.868 --rc genhtml_legend=1 00:33:55.868 --rc geninfo_all_blocks=1 00:33:55.868 --rc geninfo_unexecuted_blocks=1 00:33:55.868 00:33:55.868 ' 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:55.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.868 --rc genhtml_branch_coverage=1 00:33:55.868 --rc genhtml_function_coverage=1 00:33:55.868 --rc genhtml_legend=1 00:33:55.868 --rc geninfo_all_blocks=1 00:33:55.868 --rc geninfo_unexecuted_blocks=1 00:33:55.868 00:33:55.868 ' 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:55.868 18:32:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:55.868 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:55.869 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:55.869 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:55.869 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:55.869 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:55.869 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:55.869 18:32:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:55.869 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:55.869 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:55.869 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:55.869 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:55.869 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:55.869 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.869 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:55.869 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:55.869 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:55.869 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:55.869 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:55.869 18:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:04.013 
18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:04.013 18:33:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:04.013 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:04.013 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:04.013 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:04.013 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:04.013 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:04.014 18:33:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:04.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:04.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:34:04.014 00:34:04.014 --- 10.0.0.2 ping statistics --- 00:34:04.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:04.014 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:04.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:04.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:34:04.014 00:34:04.014 --- 10.0.0.1 ping statistics --- 00:34:04.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:04.014 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2245361 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2245361 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2245361 ']' 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:04.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:04.014 18:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:04.014 [2024-11-19 18:33:04.660650] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:04.014 [2024-11-19 18:33:04.661769] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:34:04.014 [2024-11-19 18:33:04.661817] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:04.014 [2024-11-19 18:33:04.760874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.014 [2024-11-19 18:33:04.811477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:04.014 [2024-11-19 18:33:04.811526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:04.014 [2024-11-19 18:33:04.811535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:04.014 [2024-11-19 18:33:04.811543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:04.014 [2024-11-19 18:33:04.811549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:04.014 [2024-11-19 18:33:04.812291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:04.014 [2024-11-19 18:33:04.887872] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:04.014 [2024-11-19 18:33:04.888156] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:04.014 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:04.014 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:34:04.014 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:04.014 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:04.014 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:04.276 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:04.277 [2024-11-19 18:33:05.525142] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:04.277 
18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:04.277 [2024-11-19 18:33:05.553442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:04.277 malloc0 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:04.277 { 00:34:04.277 "params": { 00:34:04.277 "name": "Nvme$subsystem", 00:34:04.277 "trtype": "$TEST_TRANSPORT", 00:34:04.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:04.277 "adrfam": "ipv4", 00:34:04.277 "trsvcid": "$NVMF_PORT", 00:34:04.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:04.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:04.277 "hdgst": ${hdgst:-false}, 00:34:04.277 "ddgst": ${ddgst:-false} 00:34:04.277 }, 00:34:04.277 "method": "bdev_nvme_attach_controller" 00:34:04.277 } 00:34:04.277 EOF 00:34:04.277 )") 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:04.277 18:33:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:04.277 18:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:04.277 "params": { 00:34:04.277 "name": "Nvme1", 00:34:04.277 "trtype": "tcp", 00:34:04.277 "traddr": "10.0.0.2", 00:34:04.277 "adrfam": "ipv4", 00:34:04.277 "trsvcid": "4420", 00:34:04.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:04.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:04.277 "hdgst": false, 00:34:04.277 "ddgst": false 00:34:04.277 }, 00:34:04.277 "method": "bdev_nvme_attach_controller" 00:34:04.277 }' 00:34:04.277 [2024-11-19 18:33:05.655246] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:34:04.277 [2024-11-19 18:33:05.655324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2245396 ] 00:34:04.539 [2024-11-19 18:33:05.748794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.539 [2024-11-19 18:33:05.803250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.800 Running I/O for 10 seconds... 
00:34:06.689 6409.00 IOPS, 50.07 MiB/s [2024-11-19T17:33:09.103Z] 6467.00 IOPS, 50.52 MiB/s [2024-11-19T17:33:10.086Z] 6494.67 IOPS, 50.74 MiB/s [2024-11-19T17:33:11.143Z] 6498.75 IOPS, 50.77 MiB/s [2024-11-19T17:33:12.087Z] 6644.20 IOPS, 51.91 MiB/s [2024-11-19T17:33:13.472Z] 7148.33 IOPS, 55.85 MiB/s [2024-11-19T17:33:14.042Z] 7515.57 IOPS, 58.72 MiB/s [2024-11-19T17:33:15.427Z] 7783.25 IOPS, 60.81 MiB/s [2024-11-19T17:33:16.369Z] 7992.67 IOPS, 62.44 MiB/s [2024-11-19T17:33:16.369Z] 8164.30 IOPS, 63.78 MiB/s 00:34:14.898 Latency(us) 00:34:14.898 [2024-11-19T17:33:16.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:14.898 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:34:14.898 Verification LBA range: start 0x0 length 0x1000 00:34:14.898 Nvme1n1 : 10.01 8168.56 63.82 0.00 0.00 15622.13 2566.83 27415.89 00:34:14.898 [2024-11-19T17:33:16.369Z] =================================================================================================================== 00:34:14.898 [2024-11-19T17:33:16.369Z] Total : 8168.56 63.82 0.00 0.00 15622.13 2566.83 27415.89 00:34:14.898 18:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2247861 00:34:14.898 18:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:34:14.898 18:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:14.898 18:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:34:14.898 18:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:34:14.898 18:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:14.898 18:33:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:14.898 18:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:14.898 18:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:14.898 { 00:34:14.898 "params": { 00:34:14.898 "name": "Nvme$subsystem", 00:34:14.898 "trtype": "$TEST_TRANSPORT", 00:34:14.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:14.898 "adrfam": "ipv4", 00:34:14.898 "trsvcid": "$NVMF_PORT", 00:34:14.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:14.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:14.898 "hdgst": ${hdgst:-false}, 00:34:14.898 "ddgst": ${ddgst:-false} 00:34:14.898 }, 00:34:14.898 "method": "bdev_nvme_attach_controller" 00:34:14.898 } 00:34:14.898 EOF 00:34:14.898 )") 00:34:14.898 [2024-11-19 18:33:16.148693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.898 [2024-11-19 18:33:16.148727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:14.898 18:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:14.898 18:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:34:14.898 18:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:14.898 18:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:14.898 "params": { 00:34:14.898 "name": "Nvme1", 00:34:14.898 "trtype": "tcp", 00:34:14.898 "traddr": "10.0.0.2", 00:34:14.898 "adrfam": "ipv4", 00:34:14.898 "trsvcid": "4420", 00:34:14.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:14.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:14.898 "hdgst": false, 00:34:14.898 "ddgst": false 00:34:14.898 }, 00:34:14.898 "method": "bdev_nvme_attach_controller" 00:34:14.898 }' 00:34:14.898 [2024-11-19 18:33:16.160664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.898 [2024-11-19 18:33:16.160673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:14.898 [2024-11-19 18:33:16.172662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.898 [2024-11-19 18:33:16.172671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:14.898 [2024-11-19 18:33:16.184661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.898 [2024-11-19 18:33:16.184669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:14.898 [2024-11-19 18:33:16.192947] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:34:14.898 [2024-11-19 18:33:16.192995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2247861 ] 00:34:14.898 [2024-11-19 18:33:16.196662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.898 [2024-11-19 18:33:16.196670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:14.899 [2024-11-19 18:33:16.208661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.899 [2024-11-19 18:33:16.208670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:14.899 [2024-11-19 18:33:16.220662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.899 [2024-11-19 18:33:16.220669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:14.899 [2024-11-19 18:33:16.232661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.899 [2024-11-19 18:33:16.232669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:14.899 [2024-11-19 18:33:16.244662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.899 [2024-11-19 18:33:16.244670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:14.899 [2024-11-19 18:33:16.256661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.899 [2024-11-19 18:33:16.256669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:14.899 [2024-11-19 18:33:16.268661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.899 [2024-11-19 18:33:16.268669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:34:14.899 [2024-11-19 18:33:16.274343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.899 [2024-11-19 18:33:16.280661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.899 [2024-11-19 18:33:16.280669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:14.899 [2024-11-19 18:33:16.292662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.899 [2024-11-19 18:33:16.292670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:14.899 [2024-11-19 18:33:16.303831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:14.899 [2024-11-19 18:33:16.304662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.899 [2024-11-19 18:33:16.304674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:14.899 [2024-11-19 18:33:16.316665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.899 [2024-11-19 18:33:16.316673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:14.899 [2024-11-19 18:33:16.328664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.899 [2024-11-19 18:33:16.328678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:14.899 [2024-11-19 18:33:16.340663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.899 [2024-11-19 18:33:16.340673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:14.899 [2024-11-19 18:33:16.352663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.899 [2024-11-19 18:33:16.352673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:14.899 [2024-11-19 18:33:16.364662] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:14.899 [2024-11-19 18:33:16.364672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.376670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.376686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.388664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.388675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.400662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.400672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.412662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.412669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.424661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.424669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.436662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.436671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.448662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.448673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.460663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.460674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.472666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.472681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 Running I/O for 5 seconds... 00:34:15.160 [2024-11-19 18:33:16.487719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.487741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.500852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.500871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.513760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.513779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.528098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.528116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.541051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.541068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.555602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.555619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.568831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.568847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.581854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.581872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.595946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.595964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.609129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.609145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.160 [2024-11-19 18:33:16.623752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.160 [2024-11-19 18:33:16.623769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.637086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.637103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.651964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.651982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.665075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.665092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.680192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 
[2024-11-19 18:33:16.680209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.693400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.693416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.708095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.708113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.721079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.721095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.735827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.735845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.748802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.748819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.761055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.761072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.775818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.775834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.788902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.788918] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.802203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.802220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.816195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.816212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.829248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.829264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.843854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.843870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.856903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.856920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.869639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.869656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.421 [2024-11-19 18:33:16.883800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.421 [2024-11-19 18:33:16.883816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.683 [2024-11-19 18:33:16.896896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.683 [2024-11-19 18:33:16.896912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:15.683 [2024-11-19 18:33:16.909882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.683 [2024-11-19 18:33:16.909899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.683 [2024-11-19 18:33:16.924073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.683 [2024-11-19 18:33:16.924089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.683 [2024-11-19 18:33:16.936908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.683 [2024-11-19 18:33:16.936926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.683 [2024-11-19 18:33:16.950082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.683 [2024-11-19 18:33:16.950099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.683 [2024-11-19 18:33:16.964099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.683 [2024-11-19 18:33:16.964115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.683 [2024-11-19 18:33:16.977529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.683 [2024-11-19 18:33:16.977545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.683 [2024-11-19 18:33:16.992327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.683 [2024-11-19 18:33:16.992344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.683 [2024-11-19 18:33:17.005615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:15.683 [2024-11-19 18:33:17.005631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:15.683 [2024-11-19 18:33:17.020630] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:15.683 [2024-11-19 18:33:17.020646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:15.683 [... the same two-line error pair repeats with advancing timestamps ...]
00:34:16.207 18860.00 IOPS, 147.34 MiB/s [2024-11-19T17:33:17.678Z]
00:34:16.207 [... error pair repeats ...]
00:34:17.250 18862.00 IOPS, 147.36 MiB/s [2024-11-19T17:33:18.721Z]
00:34:17.250 [... error pair repeats ...]
[2024-11-19 18:33:19.035862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:34:17.772 [2024-11-19 18:33:19.035878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.772 [2024-11-19 18:33:19.048920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.772 [2024-11-19 18:33:19.048936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.772 [2024-11-19 18:33:19.061828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.772 [2024-11-19 18:33:19.061844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.772 [2024-11-19 18:33:19.075865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.772 [2024-11-19 18:33:19.075882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.772 [2024-11-19 18:33:19.089330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.772 [2024-11-19 18:33:19.089345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.772 [2024-11-19 18:33:19.104101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.772 [2024-11-19 18:33:19.104118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.772 [2024-11-19 18:33:19.117409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.772 [2024-11-19 18:33:19.117428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.772 [2024-11-19 18:33:19.132111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.772 [2024-11-19 18:33:19.132127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.772 [2024-11-19 18:33:19.145041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.772 
[2024-11-19 18:33:19.145058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.772 [2024-11-19 18:33:19.159795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.772 [2024-11-19 18:33:19.159811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.772 [2024-11-19 18:33:19.172612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.772 [2024-11-19 18:33:19.172628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.772 [2024-11-19 18:33:19.185169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.772 [2024-11-19 18:33:19.185184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.772 [2024-11-19 18:33:19.199794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.772 [2024-11-19 18:33:19.199811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.772 [2024-11-19 18:33:19.212969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.772 [2024-11-19 18:33:19.212985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:17.772 [2024-11-19 18:33:19.227598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:17.772 [2024-11-19 18:33:19.227615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.032 [2024-11-19 18:33:19.240681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.032 [2024-11-19 18:33:19.240697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.032 [2024-11-19 18:33:19.253538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.032 [2024-11-19 18:33:19.253554] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.032 [2024-11-19 18:33:19.267910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.032 [2024-11-19 18:33:19.267928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.032 [2024-11-19 18:33:19.280858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.032 [2024-11-19 18:33:19.280874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.032 [2024-11-19 18:33:19.294079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.032 [2024-11-19 18:33:19.294095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.032 [2024-11-19 18:33:19.307908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.032 [2024-11-19 18:33:19.307923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.032 [2024-11-19 18:33:19.320967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.032 [2024-11-19 18:33:19.320982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.033 [2024-11-19 18:33:19.336010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.033 [2024-11-19 18:33:19.336025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.033 [2024-11-19 18:33:19.349232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.033 [2024-11-19 18:33:19.349248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.033 [2024-11-19 18:33:19.363883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.033 [2024-11-19 18:33:19.363899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:18.033 [2024-11-19 18:33:19.376969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.033 [2024-11-19 18:33:19.376988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.033 [2024-11-19 18:33:19.391829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.033 [2024-11-19 18:33:19.391845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.033 [2024-11-19 18:33:19.405094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.033 [2024-11-19 18:33:19.405110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.033 [2024-11-19 18:33:19.420399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.033 [2024-11-19 18:33:19.420415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.033 [2024-11-19 18:33:19.433809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.033 [2024-11-19 18:33:19.433825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.033 [2024-11-19 18:33:19.447841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.033 [2024-11-19 18:33:19.447857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.033 [2024-11-19 18:33:19.461092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.033 [2024-11-19 18:33:19.461107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.033 [2024-11-19 18:33:19.475830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.033 [2024-11-19 18:33:19.475846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.033 18876.33 IOPS, 147.47 MiB/s 
[2024-11-19T17:33:19.504Z] [2024-11-19 18:33:19.489011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.033 [2024-11-19 18:33:19.489027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.293 [2024-11-19 18:33:19.503645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.293 [2024-11-19 18:33:19.503662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.293 [2024-11-19 18:33:19.516921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.293 [2024-11-19 18:33:19.516937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.293 [2024-11-19 18:33:19.529707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.293 [2024-11-19 18:33:19.529722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.293 [2024-11-19 18:33:19.543724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.293 [2024-11-19 18:33:19.543740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.293 [2024-11-19 18:33:19.557071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.293 [2024-11-19 18:33:19.557087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.293 [2024-11-19 18:33:19.571569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.293 [2024-11-19 18:33:19.571585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.293 [2024-11-19 18:33:19.584711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.293 [2024-11-19 18:33:19.584727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.293 [2024-11-19 18:33:19.597384] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.293 [2024-11-19 18:33:19.597401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.293 [2024-11-19 18:33:19.612053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.293 [2024-11-19 18:33:19.612069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.293 [2024-11-19 18:33:19.625051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.293 [2024-11-19 18:33:19.625067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.293 [2024-11-19 18:33:19.640255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.293 [2024-11-19 18:33:19.640271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.293 [2024-11-19 18:33:19.653447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.293 [2024-11-19 18:33:19.653463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.293 [2024-11-19 18:33:19.667421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.293 [2024-11-19 18:33:19.667436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.293 [2024-11-19 18:33:19.680574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.293 [2024-11-19 18:33:19.680589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.294 [2024-11-19 18:33:19.693426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.294 [2024-11-19 18:33:19.693442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.294 [2024-11-19 18:33:19.707736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:18.294 [2024-11-19 18:33:19.707751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.294 [2024-11-19 18:33:19.720882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.294 [2024-11-19 18:33:19.720898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.294 [2024-11-19 18:33:19.733682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.294 [2024-11-19 18:33:19.733698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.294 [2024-11-19 18:33:19.748227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.294 [2024-11-19 18:33:19.748244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.761367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:19.761384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.776067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:19.776083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.789677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:19.789692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.804173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:19.804190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.817376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 
[2024-11-19 18:33:19.817392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.832514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:19.832530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.845800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:19.845817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.860354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:19.860370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.873536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:19.873551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.887645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:19.887661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.900718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:19.900734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.913734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:19.913751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.928343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:19.928359] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.941647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:19.941662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.956279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:19.956295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.969462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:19.969479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.984139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:19.984155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:19.997373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:19.997389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.555 [2024-11-19 18:33:20.012238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.555 [2024-11-19 18:33:20.012256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.816 [2024-11-19 18:33:20.025255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.816 [2024-11-19 18:33:20.025272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.816 [2024-11-19 18:33:20.039984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.816 [2024-11-19 18:33:20.040000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:18.816 [2024-11-19 18:33:20.053391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.816 [2024-11-19 18:33:20.053407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.816 [2024-11-19 18:33:20.067653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.816 [2024-11-19 18:33:20.067673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.816 [2024-11-19 18:33:20.081273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.816 [2024-11-19 18:33:20.081290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.816 [2024-11-19 18:33:20.095900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.816 [2024-11-19 18:33:20.095916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.816 [2024-11-19 18:33:20.108984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.816 [2024-11-19 18:33:20.109000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.816 [2024-11-19 18:33:20.123995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.816 [2024-11-19 18:33:20.124013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.816 [2024-11-19 18:33:20.137563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.816 [2024-11-19 18:33:20.137580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.816 [2024-11-19 18:33:20.151983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.816 [2024-11-19 18:33:20.151999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.816 [2024-11-19 18:33:20.165348] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.817 [2024-11-19 18:33:20.165364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.817 [2024-11-19 18:33:20.180182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.817 [2024-11-19 18:33:20.180198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.817 [2024-11-19 18:33:20.193293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.817 [2024-11-19 18:33:20.193309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.817 [2024-11-19 18:33:20.207733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.817 [2024-11-19 18:33:20.207749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.817 [2024-11-19 18:33:20.220790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.817 [2024-11-19 18:33:20.220807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.817 [2024-11-19 18:33:20.233655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.817 [2024-11-19 18:33:20.233671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.817 [2024-11-19 18:33:20.247687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.817 [2024-11-19 18:33:20.247703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.817 [2024-11-19 18:33:20.260949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:18.817 [2024-11-19 18:33:20.260965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:18.817 [2024-11-19 18:33:20.275561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:18.817 [2024-11-19 18:33:20.275578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.288905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 [2024-11-19 18:33:20.288922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.301862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 [2024-11-19 18:33:20.301877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.316094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 [2024-11-19 18:33:20.316110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.329589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 [2024-11-19 18:33:20.329606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.343560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 [2024-11-19 18:33:20.343576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.356573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 [2024-11-19 18:33:20.356589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.369779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 [2024-11-19 18:33:20.369796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.384134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 
[2024-11-19 18:33:20.384152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.397672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 [2024-11-19 18:33:20.397689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.412599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 [2024-11-19 18:33:20.412616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.425967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 [2024-11-19 18:33:20.425985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.440022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 [2024-11-19 18:33:20.440038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.453265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 [2024-11-19 18:33:20.453280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.467961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 [2024-11-19 18:33:20.467977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.481211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 [2024-11-19 18:33:20.481228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 18868.00 IOPS, 147.41 MiB/s [2024-11-19T17:33:20.548Z] [2024-11-19 18:33:20.495909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 
[2024-11-19 18:33:20.495925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.509122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 [2024-11-19 18:33:20.509138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.524247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 [2024-11-19 18:33:20.524263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.077 [2024-11-19 18:33:20.537775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.077 [2024-11-19 18:33:20.537793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.378 [2024-11-19 18:33:20.551974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.378 [2024-11-19 18:33:20.551991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.378 [2024-11-19 18:33:20.564959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.378 [2024-11-19 18:33:20.564975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.378 [2024-11-19 18:33:20.580100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.378 [2024-11-19 18:33:20.580116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.378 [2024-11-19 18:33:20.593215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.378 [2024-11-19 18:33:20.593231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.378 [2024-11-19 18:33:20.608409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.378 [2024-11-19 18:33:20.608426] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.378 [2024-11-19 18:33:20.621451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.378 [2024-11-19 18:33:20.621467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.378 [2024-11-19 18:33:20.635793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.378 [2024-11-19 18:33:20.635810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.378 [2024-11-19 18:33:20.649054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.378 [2024-11-19 18:33:20.649071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.378 [2024-11-19 18:33:20.663948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.378 [2024-11-19 18:33:20.663964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.378 [2024-11-19 18:33:20.677125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.378 [2024-11-19 18:33:20.677146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.378 [2024-11-19 18:33:20.692192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.378 [2024-11-19 18:33:20.692209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.378 [2024-11-19 18:33:20.705520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.378 [2024-11-19 18:33:20.705538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.378 [2024-11-19 18:33:20.719628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.378 [2024-11-19 18:33:20.719644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:19.378 [2024-11-19 18:33:20.732727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:19.378 [2024-11-19 18:33:20.732744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:19.378 last two messages repeated at ~13 ms intervals, first [2024-11-19 18:33:20.745876], last [2024-11-19 18:33:21.484044]
00:34:20.162 18862.40 IOPS, 147.36 MiB/s [2024-11-19T17:33:21.633Z]
00:34:20.162 [2024-11-19 18:33:21.493907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:20.162 [2024-11-19 18:33:21.493922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:20.162
00:34:20.162 Latency(us)
00:34:20.162 [2024-11-19T17:33:21.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:20.162 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:34:20.162 Nvme1n1 : 5.01 18864.51 147.38 0.00 0.00 6779.03 2034.35 11523.41
00:34:20.162 [2024-11-19T17:33:21.633Z] ===================================================================================================================
00:34:20.162 [2024-11-19T17:33:21.633Z] Total : 18864.51 147.38 0.00 0.00 6779.03 2034.35 11523.41
00:34:20.162 [2024-11-19 18:33:21.504667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:20.162 [2024-11-19 18:33:21.504681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:20.162 last two messages repeated at ~12 ms intervals, last [2024-11-19 18:33:21.588673]
00:34:20.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2247861) - No such process
00:34:20.162 18:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2247861
00:34:20.162 18:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:20.162 18:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:20.162 18:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:20.162 18:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:20.162 18:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:34:20.162 18:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:20.162 18:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:20.162 delay0
00:34:20.162 18:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:20.162 18:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns
nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:20.162 18:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.162 18:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:20.421 18:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.421 18:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:34:20.421 [2024-11-19 18:33:21.796332] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:27.001 Initializing NVMe Controllers 00:34:27.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:27.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:27.001 Initialization complete. Launching workers. 
00:34:27.001 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3998 00:34:27.001 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 4285, failed to submit 33 00:34:27.001 success 4122, unsuccessful 163, failed 0 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:27.001 rmmod nvme_tcp 00:34:27.001 rmmod nvme_fabrics 00:34:27.001 rmmod nvme_keyring 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2245361 ']' 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2245361 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 
-- # '[' -z 2245361 ']' 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2245361 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2245361 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2245361' 00:34:27.001 killing process with pid 2245361 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2245361 00:34:27.001 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2245361 00:34:27.262 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:27.262 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:27.262 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:27.262 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:27.262 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:27.262 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:27.262 18:33:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:27.262 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:27.262 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:27.262 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.262 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.262 18:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.177 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:29.177 00:34:29.177 real 0m33.770s 00:34:29.177 user 0m42.834s 00:34:29.177 sys 0m12.317s 00:34:29.177 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:29.438 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:29.438 ************************************ 00:34:29.438 END TEST nvmf_zcopy 00:34:29.438 ************************************ 00:34:29.438 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:29.438 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:29.438 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:29.438 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:29.438 
************************************ 00:34:29.438 START TEST nvmf_nmic 00:34:29.438 ************************************ 00:34:29.438 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:29.438 * Looking for test storage... 00:34:29.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:29.438 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:29.438 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:34:29.438 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:29.701 18:33:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:29.701 18:33:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:29.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.701 --rc genhtml_branch_coverage=1 00:34:29.701 --rc genhtml_function_coverage=1 00:34:29.701 --rc genhtml_legend=1 00:34:29.701 --rc geninfo_all_blocks=1 00:34:29.701 --rc geninfo_unexecuted_blocks=1 00:34:29.701 00:34:29.701 ' 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:29.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.701 --rc genhtml_branch_coverage=1 00:34:29.701 --rc genhtml_function_coverage=1 00:34:29.701 --rc genhtml_legend=1 00:34:29.701 --rc geninfo_all_blocks=1 00:34:29.701 --rc geninfo_unexecuted_blocks=1 00:34:29.701 00:34:29.701 ' 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:29.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.701 --rc genhtml_branch_coverage=1 00:34:29.701 --rc genhtml_function_coverage=1 00:34:29.701 --rc genhtml_legend=1 00:34:29.701 --rc geninfo_all_blocks=1 00:34:29.701 --rc geninfo_unexecuted_blocks=1 00:34:29.701 00:34:29.701 ' 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:29.701 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:29.701 --rc genhtml_branch_coverage=1 00:34:29.701 --rc genhtml_function_coverage=1 00:34:29.701 --rc genhtml_legend=1 00:34:29.701 --rc geninfo_all_blocks=1 00:34:29.701 --rc geninfo_unexecuted_blocks=1 00:34:29.701 00:34:29.701 ' 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:29.701 18:33:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:29.701 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.702 18:33:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:29.702 18:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:37.845 18:33:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:37.845 18:33:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:37.845 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:37.845 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:37.845 18:33:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:37.845 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.845 18:33:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:37.845 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:37.846 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:37.846 18:33:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:37.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:37.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:34:37.846 00:34:37.846 --- 10.0.0.2 ping statistics --- 00:34:37.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.846 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:37.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:37.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:34:37.846 00:34:37.846 --- 10.0.0.1 ping statistics --- 00:34:37.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:37.846 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2254258 
00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2254258 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2254258 ']' 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:37.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:37.846 18:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:37.846 [2024-11-19 18:33:38.492482] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:37.846 [2024-11-19 18:33:38.493610] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:34:37.846 [2024-11-19 18:33:38.493664] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:37.846 [2024-11-19 18:33:38.594480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:37.846 [2024-11-19 18:33:38.650323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:37.846 [2024-11-19 18:33:38.650377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:37.846 [2024-11-19 18:33:38.650385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:37.846 [2024-11-19 18:33:38.650393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:37.846 [2024-11-19 18:33:38.650399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:37.846 [2024-11-19 18:33:38.652816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:37.846 [2024-11-19 18:33:38.652975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:37.846 [2024-11-19 18:33:38.653137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:37.846 [2024-11-19 18:33:38.653136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:37.846 [2024-11-19 18:33:38.730552] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:37.846 [2024-11-19 18:33:38.731757] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:37.846 [2024-11-19 18:33:38.731759] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:37.846 [2024-11-19 18:33:38.731859] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:37.846 [2024-11-19 18:33:38.731976] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:37.846 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:37.846 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:37.846 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:37.846 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:37.847 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:38.108 [2024-11-19 18:33:39.350017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:38.108 Malloc0 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:38.108 [2024-11-19 18:33:39.458129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.108 18:33:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:38.108 test case1: single bdev can't be used in multiple subsystems 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:38.108 [2024-11-19 18:33:39.493632] 
bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:38.108 [2024-11-19 18:33:39.493659] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:38.108 [2024-11-19 18:33:39.493668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:38.108 request: 00:34:38.108 { 00:34:38.108 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:38.108 "namespace": { 00:34:38.108 "bdev_name": "Malloc0", 00:34:38.108 "no_auto_visible": false 00:34:38.108 }, 00:34:38.108 "method": "nvmf_subsystem_add_ns", 00:34:38.108 "req_id": 1 00:34:38.108 } 00:34:38.108 Got JSON-RPC error response 00:34:38.108 response: 00:34:38.108 { 00:34:38.108 "code": -32602, 00:34:38.108 "message": "Invalid parameters" 00:34:38.108 } 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:38.108 Adding namespace failed - expected result. 
00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:38.108 test case2: host connect to nvmf target in multiple paths 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:38.108 [2024-11-19 18:33:39.505784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.108 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:38.681 18:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:38.942 18:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:38.942 18:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:38.942 18:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:38.942 18:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:38.942 18:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:41.488 18:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:41.488 18:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:41.488 18:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:41.488 18:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:41.488 18:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:41.488 18:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:41.488 18:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:41.488 [global] 00:34:41.488 thread=1 00:34:41.488 invalidate=1 00:34:41.488 rw=write 00:34:41.488 time_based=1 00:34:41.488 runtime=1 00:34:41.488 ioengine=libaio 00:34:41.488 direct=1 00:34:41.488 bs=4096 00:34:41.488 iodepth=1 00:34:41.488 norandommap=0 00:34:41.488 numjobs=1 00:34:41.488 00:34:41.488 verify_dump=1 00:34:41.488 verify_backlog=512 00:34:41.488 verify_state_save=0 00:34:41.488 do_verify=1 00:34:41.488 verify=crc32c-intel 00:34:41.488 [job0] 00:34:41.488 filename=/dev/nvme0n1 00:34:41.488 Could not set queue depth (nvme0n1) 00:34:41.488 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:41.488 fio-3.35 00:34:41.488 Starting 1 thread 00:34:42.872 00:34:42.872 job0: (groupid=0, jobs=1): err= 0: pid=2255363: Tue Nov 19 
18:33:43 2024 00:34:42.872 read: IOPS=655, BW=2621KiB/s (2684kB/s)(2624KiB/1001msec) 00:34:42.872 slat (nsec): min=7140, max=60005, avg=23921.93, stdev=7582.05 00:34:42.872 clat (usec): min=432, max=870, avg=748.80, stdev=55.22 00:34:42.872 lat (usec): min=442, max=897, avg=772.72, stdev=57.28 00:34:42.872 clat percentiles (usec): 00:34:42.872 | 1.00th=[ 603], 5.00th=[ 644], 10.00th=[ 660], 20.00th=[ 693], 00:34:42.872 | 30.00th=[ 750], 40.00th=[ 758], 50.00th=[ 766], 60.00th=[ 766], 00:34:42.872 | 70.00th=[ 775], 80.00th=[ 783], 90.00th=[ 799], 95.00th=[ 816], 00:34:42.872 | 99.00th=[ 857], 99.50th=[ 857], 99.90th=[ 873], 99.95th=[ 873], 00:34:42.872 | 99.99th=[ 873] 00:34:42.872 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:34:42.872 slat (usec): min=10, max=31998, avg=59.43, stdev=999.12 00:34:42.872 clat (usec): min=178, max=2394, avg=411.31, stdev=90.36 00:34:42.872 lat (usec): min=189, max=32343, avg=470.74, stdev=1001.45 00:34:42.872 clat percentiles (usec): 00:34:42.872 | 1.00th=[ 239], 5.00th=[ 273], 10.00th=[ 322], 20.00th=[ 347], 00:34:42.872 | 30.00th=[ 375], 40.00th=[ 412], 50.00th=[ 429], 60.00th=[ 445], 00:34:42.872 | 70.00th=[ 461], 80.00th=[ 469], 90.00th=[ 474], 95.00th=[ 482], 00:34:42.872 | 99.00th=[ 506], 99.50th=[ 510], 99.90th=[ 537], 99.95th=[ 2409], 00:34:42.872 | 99.99th=[ 2409] 00:34:42.872 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:42.872 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:42.872 lat (usec) : 250=1.37%, 500=58.75%, 750=13.10%, 1000=26.73% 00:34:42.872 lat (msec) : 4=0.06% 00:34:42.872 cpu : usr=2.00%, sys=4.90%, ctx=1683, majf=0, minf=1 00:34:42.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.872 issued rwts: 
total=656,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:42.872 00:34:42.872 Run status group 0 (all jobs): 00:34:42.872 READ: bw=2621KiB/s (2684kB/s), 2621KiB/s-2621KiB/s (2684kB/s-2684kB/s), io=2624KiB (2687kB), run=1001-1001msec 00:34:42.872 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:34:42.872 00:34:42.872 Disk stats (read/write): 00:34:42.872 nvme0n1: ios=552/1024, merge=0/0, ticks=1354/412, in_queue=1766, util=98.90% 00:34:42.872 18:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:42.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:42.873 18:33:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:42.873 rmmod nvme_tcp 00:34:42.873 rmmod nvme_fabrics 00:34:42.873 rmmod nvme_keyring 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2254258 ']' 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2254258 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2254258 ']' 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2254258 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2254258 
00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2254258' 00:34:42.873 killing process with pid 2254258 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2254258 00:34:42.873 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2254258 00:34:43.134 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:43.134 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:43.134 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:43.134 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:43.134 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:43.134 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:43.134 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:43.134 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:43.134 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:43.134 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.134 18:33:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:43.134 18:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.046 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:45.046 00:34:45.046 real 0m15.734s 00:34:45.046 user 0m35.641s 00:34:45.046 sys 0m7.390s 00:34:45.046 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:45.046 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:45.046 ************************************ 00:34:45.046 END TEST nvmf_nmic 00:34:45.046 ************************************ 00:34:45.046 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:45.046 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:45.046 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:45.046 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:45.307 ************************************ 00:34:45.307 START TEST nvmf_fio_target 00:34:45.307 ************************************ 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:45.307 * Looking for test storage... 
00:34:45.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:45.307 
18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:45.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.307 --rc genhtml_branch_coverage=1 00:34:45.307 --rc genhtml_function_coverage=1 00:34:45.307 --rc genhtml_legend=1 00:34:45.307 --rc geninfo_all_blocks=1 00:34:45.307 --rc geninfo_unexecuted_blocks=1 00:34:45.307 00:34:45.307 ' 00:34:45.307 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:45.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.307 --rc genhtml_branch_coverage=1 00:34:45.307 --rc genhtml_function_coverage=1 00:34:45.307 --rc genhtml_legend=1 00:34:45.308 --rc geninfo_all_blocks=1 00:34:45.308 --rc geninfo_unexecuted_blocks=1 00:34:45.308 00:34:45.308 ' 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:45.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.308 --rc genhtml_branch_coverage=1 00:34:45.308 --rc genhtml_function_coverage=1 00:34:45.308 --rc genhtml_legend=1 00:34:45.308 --rc geninfo_all_blocks=1 00:34:45.308 --rc geninfo_unexecuted_blocks=1 00:34:45.308 00:34:45.308 ' 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:45.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:45.308 --rc genhtml_branch_coverage=1 00:34:45.308 --rc genhtml_function_coverage=1 00:34:45.308 --rc genhtml_legend=1 00:34:45.308 --rc geninfo_all_blocks=1 
00:34:45.308 --rc geninfo_unexecuted_blocks=1 00:34:45.308 00:34:45.308 ' 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:45.308 
18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.308 18:33:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:45.308 
18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:45.308 18:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:45.308 18:33:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:53.451 18:33:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:53.451 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:53.451 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:53.451 
18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:53.451 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:53.451 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:53.451 18:33:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:34:53.451 18:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:53.451 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:53.451 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:53.451 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:53.451 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:53.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:53.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:34:53.452 00:34:53.452 --- 10.0.0.2 ping statistics --- 00:34:53.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.452 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:53.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:53.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:34:53.452 00:34:53.452 --- 10.0.0.1 ping statistics --- 00:34:53.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.452 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:53.452 18:33:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2259734 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2259734 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2259734 ']' 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:53.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:53.452 18:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:53.452 [2024-11-19 18:33:54.307964] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:53.452 [2024-11-19 18:33:54.309506] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:34:53.452 [2024-11-19 18:33:54.309575] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:53.452 [2024-11-19 18:33:54.412141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:53.452 [2024-11-19 18:33:54.464454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:53.452 [2024-11-19 18:33:54.464513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:53.452 [2024-11-19 18:33:54.464522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:53.452 [2024-11-19 18:33:54.464530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:53.452 [2024-11-19 18:33:54.464536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:53.452 [2024-11-19 18:33:54.466949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.452 [2024-11-19 18:33:54.467109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:53.452 [2024-11-19 18:33:54.467274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:53.452 [2024-11-19 18:33:54.467427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.452 [2024-11-19 18:33:54.544152] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:53.452 [2024-11-19 18:33:54.545035] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:53.452 [2024-11-19 18:33:54.545374] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:53.452 [2024-11-19 18:33:54.545878] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:53.452 [2024-11-19 18:33:54.545929] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:53.713 18:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:53.713 18:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:53.713 18:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:53.713 18:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:53.713 18:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:53.713 18:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:53.713 18:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:53.973 [2024-11-19 18:33:55.344381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:53.973 18:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:54.260 18:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:54.260 18:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:34:54.521 18:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:54.521 18:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:54.781 18:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:54.781 18:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:54.781 18:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:54.781 18:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:55.041 18:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:55.303 18:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:55.303 18:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:55.563 18:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:55.563 18:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:55.825 18:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:34:55.825 18:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:55.825 18:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:56.086 18:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:56.086 18:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:56.348 18:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:56.348 18:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:56.348 18:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:56.609 [2024-11-19 18:33:57.936346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.609 18:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:56.869 18:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:57.130 18:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:57.390 18:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:57.390 18:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:57.390 18:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:57.390 18:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:57.390 18:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:57.390 18:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:59.306 18:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:59.306 18:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:59.306 18:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:59.578 18:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:59.578 18:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:59.578 18:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:34:59.578 18:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:59.578 [global] 00:34:59.578 thread=1 00:34:59.578 invalidate=1 00:34:59.578 rw=write 00:34:59.578 time_based=1 00:34:59.578 runtime=1 00:34:59.578 ioengine=libaio 00:34:59.578 direct=1 00:34:59.578 bs=4096 00:34:59.578 iodepth=1 00:34:59.578 norandommap=0 00:34:59.578 numjobs=1 00:34:59.578 00:34:59.578 verify_dump=1 00:34:59.578 verify_backlog=512 00:34:59.578 verify_state_save=0 00:34:59.578 do_verify=1 00:34:59.578 verify=crc32c-intel 00:34:59.578 [job0] 00:34:59.578 filename=/dev/nvme0n1 00:34:59.578 [job1] 00:34:59.578 filename=/dev/nvme0n2 00:34:59.578 [job2] 00:34:59.578 filename=/dev/nvme0n3 00:34:59.578 [job3] 00:34:59.578 filename=/dev/nvme0n4 00:34:59.578 Could not set queue depth (nvme0n1) 00:34:59.578 Could not set queue depth (nvme0n2) 00:34:59.578 Could not set queue depth (nvme0n3) 00:34:59.578 Could not set queue depth (nvme0n4) 00:34:59.838 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:59.838 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:59.838 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:59.838 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:59.838 fio-3.35 00:34:59.838 Starting 4 threads 00:35:01.220 00:35:01.220 job0: (groupid=0, jobs=1): err= 0: pid=2261295: Tue Nov 19 18:34:02 2024 00:35:01.220 read: IOPS=512, BW=2048KiB/s (2097kB/s)(2048KiB/1000msec) 00:35:01.220 slat (nsec): min=26604, max=59882, avg=27710.85, stdev=3323.27 00:35:01.220 clat (usec): min=622, max=1448, avg=1067.96, stdev=120.71 00:35:01.220 lat (usec): min=649, 
max=1475, avg=1095.67, stdev=120.59 00:35:01.220 clat percentiles (usec): 00:35:01.220 | 1.00th=[ 807], 5.00th=[ 873], 10.00th=[ 914], 20.00th=[ 955], 00:35:01.220 | 30.00th=[ 1004], 40.00th=[ 1045], 50.00th=[ 1074], 60.00th=[ 1106], 00:35:01.220 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1254], 00:35:01.220 | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[ 1450], 99.95th=[ 1450], 00:35:01.220 | 99.99th=[ 1450] 00:35:01.220 write: IOPS=553, BW=2212KiB/s (2265kB/s)(2212KiB/1000msec); 0 zone resets 00:35:01.220 slat (usec): min=9, max=40068, avg=147.20, stdev=1977.41 00:35:01.220 clat (usec): min=275, max=1068, avg=624.70, stdev=128.54 00:35:01.220 lat (usec): min=286, max=40840, avg=771.90, stdev=1981.79 00:35:01.220 clat percentiles (usec): 00:35:01.220 | 1.00th=[ 351], 5.00th=[ 412], 10.00th=[ 469], 20.00th=[ 519], 00:35:01.220 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:35:01.220 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 791], 95.00th=[ 848], 00:35:01.220 | 99.00th=[ 955], 99.50th=[ 1004], 99.90th=[ 1074], 99.95th=[ 1074], 00:35:01.220 | 99.99th=[ 1074] 00:35:01.220 bw ( KiB/s): min= 4096, max= 4096, per=36.63%, avg=4096.00, stdev= 0.00, samples=1 00:35:01.220 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:01.220 lat (usec) : 500=7.70%, 750=35.59%, 1000=22.82% 00:35:01.220 lat (msec) : 2=33.90% 00:35:01.220 cpu : usr=1.50%, sys=5.00%, ctx=1069, majf=0, minf=1 00:35:01.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.220 issued rwts: total=512,553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:01.220 job1: (groupid=0, jobs=1): err= 0: pid=2261316: Tue Nov 19 18:34:02 2024 00:35:01.220 read: IOPS=511, BW=2046KiB/s 
(2095kB/s)(2048KiB/1001msec) 00:35:01.220 slat (nsec): min=24957, max=44561, avg=25903.27, stdev=2147.96 00:35:01.220 clat (usec): min=614, max=1370, avg=1054.25, stdev=127.26 00:35:01.220 lat (usec): min=639, max=1395, avg=1080.15, stdev=127.03 00:35:01.220 clat percentiles (usec): 00:35:01.220 | 1.00th=[ 758], 5.00th=[ 832], 10.00th=[ 889], 20.00th=[ 938], 00:35:01.220 | 30.00th=[ 979], 40.00th=[ 1029], 50.00th=[ 1074], 60.00th=[ 1106], 00:35:01.220 | 70.00th=[ 1123], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1237], 00:35:01.220 | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[ 1369], 99.95th=[ 1369], 00:35:01.220 | 99.99th=[ 1369] 00:35:01.220 write: IOPS=691, BW=2765KiB/s (2832kB/s)(2768KiB/1001msec); 0 zone resets 00:35:01.220 slat (nsec): min=9459, max=52680, avg=30625.77, stdev=8419.20 00:35:01.220 clat (usec): min=207, max=992, avg=601.62, stdev=117.01 00:35:01.220 lat (usec): min=220, max=1025, avg=632.25, stdev=119.14 00:35:01.220 clat percentiles (usec): 00:35:01.220 | 1.00th=[ 338], 5.00th=[ 416], 10.00th=[ 461], 20.00th=[ 502], 00:35:01.221 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 627], 00:35:01.221 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 807], 00:35:01.221 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 996], 99.95th=[ 996], 00:35:01.221 | 99.99th=[ 996] 00:35:01.221 bw ( KiB/s): min= 4096, max= 4096, per=36.63%, avg=4096.00, stdev= 0.00, samples=1 00:35:01.221 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:01.221 lat (usec) : 250=0.08%, 500=11.46%, 750=40.37%, 1000=20.02% 00:35:01.221 lat (msec) : 2=28.07% 00:35:01.221 cpu : usr=2.00%, sys=3.30%, ctx=1204, majf=0, minf=2 00:35:01.221 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.221 issued rwts: total=512,692,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:35:01.221 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:01.221 job2: (groupid=0, jobs=1): err= 0: pid=2261319: Tue Nov 19 18:34:02 2024 00:35:01.221 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:01.221 slat (nsec): min=4180, max=49865, avg=10752.06, stdev=9114.44 00:35:01.221 clat (usec): min=590, max=1313, avg=865.07, stdev=124.43 00:35:01.221 lat (usec): min=595, max=1355, avg=875.82, stdev=131.77 00:35:01.221 clat percentiles (usec): 00:35:01.221 | 1.00th=[ 652], 5.00th=[ 709], 10.00th=[ 750], 20.00th=[ 775], 00:35:01.221 | 30.00th=[ 799], 40.00th=[ 816], 50.00th=[ 832], 60.00th=[ 848], 00:35:01.221 | 70.00th=[ 881], 80.00th=[ 963], 90.00th=[ 1045], 95.00th=[ 1123], 00:35:01.221 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1319], 99.95th=[ 1319], 00:35:01.221 | 99.99th=[ 1319] 00:35:01.221 write: IOPS=975, BW=3900KiB/s (3994kB/s)(3904KiB/1001msec); 0 zone resets 00:35:01.221 slat (usec): min=4, max=1591, avg=19.99, stdev=52.24 00:35:01.221 clat (usec): min=196, max=907, avg=539.99, stdev=144.57 00:35:01.221 lat (usec): min=201, max=2223, avg=559.98, stdev=163.03 00:35:01.221 clat percentiles (usec): 00:35:01.221 | 1.00th=[ 253], 5.00th=[ 310], 10.00th=[ 351], 20.00th=[ 400], 00:35:01.221 | 30.00th=[ 449], 40.00th=[ 490], 50.00th=[ 545], 60.00th=[ 594], 00:35:01.221 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 725], 95.00th=[ 766], 00:35:01.221 | 99.00th=[ 865], 99.50th=[ 873], 99.90th=[ 906], 99.95th=[ 906], 00:35:01.221 | 99.99th=[ 906] 00:35:01.221 bw ( KiB/s): min= 4096, max= 4096, per=36.63%, avg=4096.00, stdev= 0.00, samples=1 00:35:01.221 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:01.221 lat (usec) : 250=0.60%, 500=27.08%, 750=37.37%, 1000=29.50% 00:35:01.221 lat (msec) : 2=5.44% 00:35:01.221 cpu : usr=1.30%, sys=2.20%, ctx=1492, majf=0, minf=1 00:35:01.221 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.221 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.221 issued rwts: total=512,976,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.221 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:01.221 job3: (groupid=0, jobs=1): err= 0: pid=2261320: Tue Nov 19 18:34:02 2024 00:35:01.221 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:01.221 slat (nsec): min=25539, max=62324, avg=29313.41, stdev=3870.28 00:35:01.221 clat (usec): min=740, max=1452, avg=1105.16, stdev=100.43 00:35:01.221 lat (usec): min=767, max=1478, avg=1134.48, stdev=100.03 00:35:01.221 clat percentiles (usec): 00:35:01.221 | 1.00th=[ 824], 5.00th=[ 914], 10.00th=[ 971], 20.00th=[ 1029], 00:35:01.221 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1139], 00:35:01.221 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1254], 00:35:01.221 | 99.00th=[ 1319], 99.50th=[ 1401], 99.90th=[ 1450], 99.95th=[ 1450], 00:35:01.221 | 99.99th=[ 1450] 00:35:01.221 write: IOPS=576, BW=2306KiB/s (2361kB/s)(2308KiB/1001msec); 0 zone resets 00:35:01.221 slat (nsec): min=9452, max=82943, avg=31935.91, stdev=9425.92 00:35:01.221 clat (usec): min=253, max=1020, avg=678.13, stdev=139.00 00:35:01.221 lat (usec): min=264, max=1075, avg=710.07, stdev=142.40 00:35:01.221 clat percentiles (usec): 00:35:01.221 | 1.00th=[ 318], 5.00th=[ 429], 10.00th=[ 490], 20.00th=[ 570], 00:35:01.221 | 30.00th=[ 619], 40.00th=[ 652], 50.00th=[ 685], 60.00th=[ 725], 00:35:01.221 | 70.00th=[ 758], 80.00th=[ 799], 90.00th=[ 848], 95.00th=[ 898], 00:35:01.221 | 99.00th=[ 947], 99.50th=[ 1004], 99.90th=[ 1020], 99.95th=[ 1020], 00:35:01.221 | 99.99th=[ 1020] 00:35:01.221 bw ( KiB/s): min= 4096, max= 4096, per=36.63%, avg=4096.00, stdev= 0.00, samples=1 00:35:01.221 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:01.221 lat (usec) : 500=6.15%, 750=30.39%, 1000=22.50% 00:35:01.221 lat (msec) : 
2=40.96% 00:35:01.221 cpu : usr=2.00%, sys=4.40%, ctx=1089, majf=0, minf=1 00:35:01.221 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.221 issued rwts: total=512,577,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.221 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:01.221 00:35:01.221 Run status group 0 (all jobs): 00:35:01.221 READ: bw=8184KiB/s (8380kB/s), 2046KiB/s-2048KiB/s (2095kB/s-2097kB/s), io=8192KiB (8389kB), run=1000-1001msec 00:35:01.221 WRITE: bw=10.9MiB/s (11.4MB/s), 2212KiB/s-3900KiB/s (2265kB/s-3994kB/s), io=10.9MiB (11.5MB), run=1000-1001msec 00:35:01.221 00:35:01.221 Disk stats (read/write): 00:35:01.221 nvme0n1: ios=390/512, merge=0/0, ticks=1201/230, in_queue=1431, util=85.47% 00:35:01.221 nvme0n2: ios=459/512, merge=0/0, ticks=839/291, in_queue=1130, util=88.76% 00:35:01.221 nvme0n3: ios=578/628, merge=0/0, ticks=650/310, in_queue=960, util=97.09% 00:35:01.221 nvme0n4: ios=410/512, merge=0/0, ticks=910/324, in_queue=1234, util=97.82% 00:35:01.221 18:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:01.221 [global] 00:35:01.221 thread=1 00:35:01.221 invalidate=1 00:35:01.221 rw=randwrite 00:35:01.221 time_based=1 00:35:01.221 runtime=1 00:35:01.221 ioengine=libaio 00:35:01.221 direct=1 00:35:01.221 bs=4096 00:35:01.221 iodepth=1 00:35:01.221 norandommap=0 00:35:01.221 numjobs=1 00:35:01.221 00:35:01.221 verify_dump=1 00:35:01.221 verify_backlog=512 00:35:01.221 verify_state_save=0 00:35:01.221 do_verify=1 00:35:01.221 verify=crc32c-intel 00:35:01.221 [job0] 00:35:01.221 filename=/dev/nvme0n1 00:35:01.221 [job1] 00:35:01.221 filename=/dev/nvme0n2 00:35:01.221 [job2] 00:35:01.221 
filename=/dev/nvme0n3 00:35:01.221 [job3] 00:35:01.221 filename=/dev/nvme0n4 00:35:01.221 Could not set queue depth (nvme0n1) 00:35:01.221 Could not set queue depth (nvme0n2) 00:35:01.221 Could not set queue depth (nvme0n3) 00:35:01.221 Could not set queue depth (nvme0n4) 00:35:01.482 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:01.482 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:01.482 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:01.482 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:01.482 fio-3.35 00:35:01.482 Starting 4 threads 00:35:02.871 00:35:02.871 job0: (groupid=0, jobs=1): err= 0: pid=2261759: Tue Nov 19 18:34:04 2024 00:35:02.871 read: IOPS=680, BW=2721KiB/s (2787kB/s)(2724KiB/1001msec) 00:35:02.871 slat (nsec): min=6482, max=48729, avg=26017.22, stdev=7663.07 00:35:02.871 clat (usec): min=285, max=961, avg=724.14, stdev=111.60 00:35:02.871 lat (usec): min=313, max=988, avg=750.16, stdev=113.71 00:35:02.871 clat percentiles (usec): 00:35:02.871 | 1.00th=[ 383], 5.00th=[ 494], 10.00th=[ 586], 20.00th=[ 652], 00:35:02.871 | 30.00th=[ 685], 40.00th=[ 717], 50.00th=[ 742], 60.00th=[ 766], 00:35:02.871 | 70.00th=[ 791], 80.00th=[ 807], 90.00th=[ 848], 95.00th=[ 873], 00:35:02.871 | 99.00th=[ 930], 99.50th=[ 930], 99.90th=[ 963], 99.95th=[ 963], 00:35:02.871 | 99.99th=[ 963] 00:35:02.871 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:35:02.871 slat (nsec): min=9225, max=66710, avg=31383.68, stdev=9676.19 00:35:02.871 clat (usec): min=135, max=974, avg=433.28, stdev=140.11 00:35:02.871 lat (usec): min=169, max=1009, avg=464.66, stdev=141.53 00:35:02.871 clat percentiles (usec): 00:35:02.871 | 1.00th=[ 198], 5.00th=[ 231], 10.00th=[ 273], 20.00th=[ 302], 
00:35:02.871 | 30.00th=[ 330], 40.00th=[ 392], 50.00th=[ 424], 60.00th=[ 457], 00:35:02.871 | 70.00th=[ 502], 80.00th=[ 553], 90.00th=[ 627], 95.00th=[ 693], 00:35:02.871 | 99.00th=[ 775], 99.50th=[ 824], 99.90th=[ 889], 99.95th=[ 971], 00:35:02.871 | 99.99th=[ 971] 00:35:02.871 bw ( KiB/s): min= 4096, max= 4096, per=38.03%, avg=4096.00, stdev= 0.00, samples=1 00:35:02.871 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:02.871 lat (usec) : 250=4.81%, 500=39.24%, 750=36.07%, 1000=19.88% 00:35:02.871 cpu : usr=3.70%, sys=6.40%, ctx=1708, majf=0, minf=1 00:35:02.871 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:02.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.871 issued rwts: total=681,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.871 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:02.871 job1: (groupid=0, jobs=1): err= 0: pid=2261786: Tue Nov 19 18:34:04 2024 00:35:02.871 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:02.871 slat (nsec): min=9253, max=57248, avg=27187.36, stdev=3408.52 00:35:02.871 clat (usec): min=477, max=1331, avg=1037.31, stdev=127.17 00:35:02.871 lat (usec): min=505, max=1358, avg=1064.50, stdev=126.75 00:35:02.871 clat percentiles (usec): 00:35:02.871 | 1.00th=[ 709], 5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 947], 00:35:02.871 | 30.00th=[ 988], 40.00th=[ 1012], 50.00th=[ 1045], 60.00th=[ 1074], 00:35:02.871 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1188], 95.00th=[ 1221], 00:35:02.871 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1336], 99.95th=[ 1336], 00:35:02.871 | 99.99th=[ 1336] 00:35:02.871 write: IOPS=738, BW=2953KiB/s (3024kB/s)(2956KiB/1001msec); 0 zone resets 00:35:02.871 slat (nsec): min=8856, max=57089, avg=30416.76, stdev=8558.00 00:35:02.871 clat (usec): min=174, max=844, avg=571.61, stdev=121.83 00:35:02.871 lat 
(usec): min=184, max=893, avg=602.03, stdev=124.95 00:35:02.871 clat percentiles (usec): 00:35:02.871 | 1.00th=[ 273], 5.00th=[ 351], 10.00th=[ 412], 20.00th=[ 461], 00:35:02.871 | 30.00th=[ 515], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 611], 00:35:02.871 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 725], 95.00th=[ 766], 00:35:02.871 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 848], 99.95th=[ 848], 00:35:02.871 | 99.99th=[ 848] 00:35:02.871 bw ( KiB/s): min= 4096, max= 4096, per=38.03%, avg=4096.00, stdev= 0.00, samples=1 00:35:02.871 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:02.871 lat (usec) : 250=0.32%, 500=15.83%, 750=40.45%, 1000=16.31% 00:35:02.871 lat (msec) : 2=27.10% 00:35:02.871 cpu : usr=2.30%, sys=5.30%, ctx=1251, majf=0, minf=2 00:35:02.871 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:02.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.871 issued rwts: total=512,739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.871 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:02.871 job2: (groupid=0, jobs=1): err= 0: pid=2261816: Tue Nov 19 18:34:04 2024 00:35:02.871 read: IOPS=20, BW=81.2KiB/s (83.1kB/s)(84.0KiB/1035msec) 00:35:02.871 slat (nsec): min=12116, max=26218, avg=24957.71, stdev=2954.27 00:35:02.871 clat (usec): min=713, max=41303, avg=39069.12, stdev=8789.05 00:35:02.871 lat (usec): min=739, max=41315, avg=39094.08, stdev=8788.73 00:35:02.871 clat percentiles (usec): 00:35:02.871 | 1.00th=[ 717], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:35:02.872 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:02.872 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:02.872 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:02.872 | 99.99th=[41157] 00:35:02.872 write: IOPS=494, 
BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:35:02.872 slat (nsec): min=9327, max=52221, avg=26968.26, stdev=10006.71 00:35:02.872 clat (usec): min=115, max=783, avg=382.89, stdev=134.28 00:35:02.872 lat (usec): min=125, max=815, avg=409.86, stdev=139.23 00:35:02.872 clat percentiles (usec): 00:35:02.872 | 1.00th=[ 122], 5.00th=[ 133], 10.00th=[ 206], 20.00th=[ 285], 00:35:02.872 | 30.00th=[ 318], 40.00th=[ 338], 50.00th=[ 371], 60.00th=[ 420], 00:35:02.872 | 70.00th=[ 457], 80.00th=[ 498], 90.00th=[ 553], 95.00th=[ 603], 00:35:02.872 | 99.00th=[ 701], 99.50th=[ 734], 99.90th=[ 783], 99.95th=[ 783], 00:35:02.872 | 99.99th=[ 783] 00:35:02.872 bw ( KiB/s): min= 4096, max= 4096, per=38.03%, avg=4096.00, stdev= 0.00, samples=1 00:35:02.872 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:02.872 lat (usec) : 250=14.07%, 500=63.04%, 750=18.95%, 1000=0.19% 00:35:02.872 lat (msec) : 50=3.75% 00:35:02.872 cpu : usr=0.77%, sys=1.35%, ctx=533, majf=0, minf=2 00:35:02.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:02.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.872 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:02.872 job3: (groupid=0, jobs=1): err= 0: pid=2261828: Tue Nov 19 18:34:04 2024 00:35:02.872 read: IOPS=19, BW=78.7KiB/s (80.6kB/s)(80.0KiB/1016msec) 00:35:02.872 slat (nsec): min=6213, max=9315, avg=8113.20, stdev=730.86 00:35:02.872 clat (usec): min=40913, max=41169, avg=40985.40, stdev=66.70 00:35:02.872 lat (usec): min=40921, max=41176, avg=40993.51, stdev=66.72 00:35:02.872 clat percentiles (usec): 00:35:02.872 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:02.872 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:02.872 | 
70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:02.872 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:02.872 | 99.99th=[41157] 00:35:02.872 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:35:02.872 slat (nsec): min=4364, max=54182, avg=6233.97, stdev=3294.46 00:35:02.872 clat (usec): min=137, max=754, avg=374.02, stdev=105.75 00:35:02.872 lat (usec): min=143, max=789, avg=380.25, stdev=106.66 00:35:02.872 clat percentiles (usec): 00:35:02.872 | 1.00th=[ 202], 5.00th=[ 227], 10.00th=[ 241], 20.00th=[ 265], 00:35:02.872 | 30.00th=[ 306], 40.00th=[ 351], 50.00th=[ 375], 60.00th=[ 400], 00:35:02.872 | 70.00th=[ 420], 80.00th=[ 465], 90.00th=[ 510], 95.00th=[ 553], 00:35:02.872 | 99.00th=[ 660], 99.50th=[ 701], 99.90th=[ 758], 99.95th=[ 758], 00:35:02.872 | 99.99th=[ 758] 00:35:02.872 bw ( KiB/s): min= 4096, max= 4096, per=38.03%, avg=4096.00, stdev= 0.00, samples=1 00:35:02.872 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:02.872 lat (usec) : 250=13.35%, 500=70.68%, 750=12.03%, 1000=0.19% 00:35:02.872 lat (msec) : 50=3.76% 00:35:02.872 cpu : usr=0.00%, sys=0.49%, ctx=533, majf=0, minf=2 00:35:02.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:02.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:02.872 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:02.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:02.872 00:35:02.872 Run status group 0 (all jobs): 00:35:02.872 READ: bw=4769KiB/s (4884kB/s), 78.7KiB/s-2721KiB/s (80.6kB/s-2787kB/s), io=4936KiB (5054kB), run=1001-1035msec 00:35:02.872 WRITE: bw=10.5MiB/s (11.0MB/s), 1979KiB/s-4092KiB/s (2026kB/s-4190kB/s), io=10.9MiB (11.4MB), run=1001-1035msec 00:35:02.872 00:35:02.872 Disk stats (read/write): 00:35:02.872 nvme0n1: 
ios=565/812, merge=0/0, ticks=708/283, in_queue=991, util=99.80% 00:35:02.872 nvme0n2: ios=458/512, merge=0/0, ticks=470/222, in_queue=692, util=82.54% 00:35:02.872 nvme0n3: ios=69/512, merge=0/0, ticks=889/180, in_queue=1069, util=92.20% 00:35:02.872 nvme0n4: ios=14/512, merge=0/0, ticks=574/185, in_queue=759, util=88.89% 00:35:02.872 18:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:35:02.872 [global] 00:35:02.872 thread=1 00:35:02.872 invalidate=1 00:35:02.872 rw=write 00:35:02.872 time_based=1 00:35:02.872 runtime=1 00:35:02.872 ioengine=libaio 00:35:02.872 direct=1 00:35:02.872 bs=4096 00:35:02.872 iodepth=128 00:35:02.872 norandommap=0 00:35:02.872 numjobs=1 00:35:02.872 00:35:02.872 verify_dump=1 00:35:02.872 verify_backlog=512 00:35:02.872 verify_state_save=0 00:35:02.872 do_verify=1 00:35:02.872 verify=crc32c-intel 00:35:02.872 [job0] 00:35:02.872 filename=/dev/nvme0n1 00:35:02.872 [job1] 00:35:02.872 filename=/dev/nvme0n2 00:35:02.872 [job2] 00:35:02.872 filename=/dev/nvme0n3 00:35:02.872 [job3] 00:35:02.872 filename=/dev/nvme0n4 00:35:03.160 Could not set queue depth (nvme0n1) 00:35:03.160 Could not set queue depth (nvme0n2) 00:35:03.160 Could not set queue depth (nvme0n3) 00:35:03.160 Could not set queue depth (nvme0n4) 00:35:03.427 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:03.427 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:03.427 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:03.427 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:03.427 fio-3.35 00:35:03.427 Starting 4 threads 00:35:04.518 00:35:04.518 job0: (groupid=0, jobs=1): err= 0: 
pid=2262238: Tue Nov 19 18:34:05 2024 00:35:04.518 read: IOPS=4784, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1006msec) 00:35:04.518 slat (nsec): min=906, max=11139k, avg=94084.25, stdev=692546.89 00:35:04.518 clat (usec): min=1443, max=49833, avg=11730.27, stdev=6028.84 00:35:04.518 lat (usec): min=4625, max=49841, avg=11824.36, stdev=6092.71 00:35:04.518 clat percentiles (usec): 00:35:04.518 | 1.00th=[ 5473], 5.00th=[ 6325], 10.00th=[ 6849], 20.00th=[ 7373], 00:35:04.518 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[10290], 00:35:04.518 | 70.00th=[12387], 80.00th=[17957], 90.00th=[19530], 95.00th=[23200], 00:35:04.518 | 99.00th=[29230], 99.50th=[40109], 99.90th=[50070], 99.95th=[50070], 00:35:04.518 | 99.99th=[50070] 00:35:04.518 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:35:04.518 slat (nsec): min=1549, max=11881k, avg=101281.32, stdev=688259.68 00:35:04.518 clat (usec): min=739, max=68947, avg=13902.22, stdev=9928.82 00:35:04.518 lat (usec): min=3004, max=68956, avg=14003.50, stdev=9992.64 00:35:04.518 clat percentiles (usec): 00:35:04.518 | 1.00th=[ 3785], 5.00th=[ 5211], 10.00th=[ 6587], 20.00th=[ 7373], 00:35:04.518 | 30.00th=[ 7635], 40.00th=[ 8717], 50.00th=[11731], 60.00th=[12780], 00:35:04.518 | 70.00th=[17171], 80.00th=[18482], 90.00th=[21627], 95.00th=[31327], 00:35:04.518 | 99.00th=[56886], 99.50th=[62653], 99.90th=[68682], 99.95th=[68682], 00:35:04.518 | 99.99th=[68682] 00:35:04.518 bw ( KiB/s): min=16904, max=24056, per=23.96%, avg=20480.00, stdev=5057.23, samples=2 00:35:04.518 iops : min= 4226, max= 6014, avg=5120.00, stdev=1264.31, samples=2 00:35:04.518 lat (usec) : 750=0.01% 00:35:04.518 lat (msec) : 2=0.01%, 4=0.88%, 10=51.20%, 20=36.19%, 50=10.36% 00:35:04.518 lat (msec) : 100=1.35% 00:35:04.518 cpu : usr=3.58%, sys=5.77%, ctx=371, majf=0, minf=1 00:35:04.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:35:04.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:35:04.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:04.518 issued rwts: total=4813,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:04.518 job1: (groupid=0, jobs=1): err= 0: pid=2262266: Tue Nov 19 18:34:05 2024 00:35:04.518 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:35:04.519 slat (nsec): min=908, max=16037k, avg=97631.77, stdev=783636.28 00:35:04.519 clat (usec): min=3527, max=38112, avg=13168.21, stdev=7535.20 00:35:04.519 lat (usec): min=4239, max=38138, avg=13265.84, stdev=7591.32 00:35:04.519 clat percentiles (usec): 00:35:04.519 | 1.00th=[ 4817], 5.00th=[ 6325], 10.00th=[ 6849], 20.00th=[ 7439], 00:35:04.519 | 30.00th=[ 7963], 40.00th=[ 8291], 50.00th=[ 9372], 60.00th=[10552], 00:35:04.519 | 70.00th=[17695], 80.00th=[20055], 90.00th=[25297], 95.00th=[28443], 00:35:04.519 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:35:04.519 | 99.99th=[38011] 00:35:04.519 write: IOPS=4807, BW=18.8MiB/s (19.7MB/s)(18.9MiB/1008msec); 0 zone resets 00:35:04.519 slat (nsec): min=1569, max=18787k, avg=108663.67, stdev=750788.92 00:35:04.519 clat (usec): min=3179, max=63016, avg=13774.76, stdev=10508.02 00:35:04.519 lat (usec): min=3294, max=63025, avg=13883.42, stdev=10589.69 00:35:04.519 clat percentiles (usec): 00:35:04.519 | 1.00th=[ 5211], 5.00th=[ 5997], 10.00th=[ 6128], 20.00th=[ 7177], 00:35:04.519 | 30.00th=[ 7308], 40.00th=[ 7635], 50.00th=[10290], 60.00th=[12256], 00:35:04.519 | 70.00th=[14353], 80.00th=[18220], 90.00th=[26346], 95.00th=[35390], 00:35:04.519 | 99.00th=[60031], 99.50th=[61080], 99.90th=[63177], 99.95th=[63177], 00:35:04.519 | 99.99th=[63177] 00:35:04.519 bw ( KiB/s): min=18816, max=18928, per=22.08%, avg=18872.00, stdev=79.20, samples=2 00:35:04.519 iops : min= 4704, max= 4732, avg=4718.00, stdev=19.80, samples=2 00:35:04.519 lat (msec) : 4=0.10%, 10=52.05%, 20=28.82%, 50=17.78%, 
100=1.25% 00:35:04.519 cpu : usr=3.87%, sys=5.36%, ctx=324, majf=0, minf=1 00:35:04.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:35:04.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:04.519 issued rwts: total=4608,4846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:04.519 job2: (groupid=0, jobs=1): err= 0: pid=2262296: Tue Nov 19 18:34:05 2024 00:35:04.519 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:35:04.519 slat (nsec): min=1037, max=9406.0k, avg=83819.21, stdev=603024.15 00:35:04.519 clat (usec): min=3257, max=38396, avg=10240.98, stdev=4091.09 00:35:04.519 lat (usec): min=3266, max=38399, avg=10324.80, stdev=4145.71 00:35:04.519 clat percentiles (usec): 00:35:04.519 | 1.00th=[ 5276], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7570], 00:35:04.519 | 30.00th=[ 8094], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9634], 00:35:04.519 | 70.00th=[10421], 80.00th=[11863], 90.00th=[14877], 95.00th=[16909], 00:35:04.519 | 99.00th=[28705], 99.50th=[33162], 99.90th=[38536], 99.95th=[38536], 00:35:04.519 | 99.99th=[38536] 00:35:04.519 write: IOPS=4368, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1008msec); 0 zone resets 00:35:04.519 slat (nsec): min=1757, max=46246k, avg=144177.19, stdev=1578273.50 00:35:04.519 clat (usec): min=1933, max=202690, avg=16020.12, stdev=17135.06 00:35:04.519 lat (usec): min=1946, max=202700, avg=16164.29, stdev=17381.70 00:35:04.519 clat percentiles (msec): 00:35:04.519 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7], 00:35:04.519 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 12], 60.00th=[ 13], 00:35:04.519 | 70.00th=[ 20], 80.00th=[ 23], 90.00th=[ 29], 95.00th=[ 44], 00:35:04.519 | 99.00th=[ 85], 99.50th=[ 131], 99.90th=[ 203], 99.95th=[ 203], 00:35:04.519 | 99.99th=[ 203] 00:35:04.519 bw ( KiB/s): min=12696, max=21504, 
per=20.01%, avg=17100.00, stdev=6228.20, samples=2 00:35:04.519 iops : min= 3174, max= 5376, avg=4275.00, stdev=1557.05, samples=2 00:35:04.519 lat (msec) : 2=0.07%, 4=0.85%, 10=54.27%, 20=28.78%, 50=15.28% 00:35:04.519 lat (msec) : 100=0.38%, 250=0.38% 00:35:04.519 cpu : usr=3.57%, sys=4.67%, ctx=372, majf=0, minf=2 00:35:04.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:35:04.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:04.519 issued rwts: total=4096,4403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:04.519 job3: (groupid=0, jobs=1): err= 0: pid=2262308: Tue Nov 19 18:34:05 2024 00:35:04.519 read: IOPS=6760, BW=26.4MiB/s (27.7MB/s)(26.6MiB/1006msec) 00:35:04.519 slat (nsec): min=918, max=8284.6k, avg=66473.04, stdev=504387.79 00:35:04.519 clat (usec): min=2071, max=30137, avg=9295.27, stdev=3487.50 00:35:04.519 lat (usec): min=2104, max=30148, avg=9361.75, stdev=3525.32 00:35:04.519 clat percentiles (usec): 00:35:04.519 | 1.00th=[ 2999], 5.00th=[ 5604], 10.00th=[ 5997], 20.00th=[ 6980], 00:35:04.519 | 30.00th=[ 7373], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9503], 00:35:04.519 | 70.00th=[10028], 80.00th=[10814], 90.00th=[13042], 95.00th=[16319], 00:35:04.519 | 99.00th=[22152], 99.50th=[25822], 99.90th=[29492], 99.95th=[30016], 00:35:04.519 | 99.99th=[30016] 00:35:04.519 write: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec); 0 zone resets 00:35:04.519 slat (nsec): min=1548, max=9362.1k, avg=61823.13, stdev=412087.08 00:35:04.519 clat (usec): min=613, max=30082, avg=8997.37, stdev=4596.31 00:35:04.519 lat (usec): min=666, max=30084, avg=9059.19, stdev=4620.60 00:35:04.519 clat percentiles (usec): 00:35:04.519 | 1.00th=[ 1336], 5.00th=[ 3720], 10.00th=[ 4424], 20.00th=[ 5735], 00:35:04.519 | 30.00th=[ 6587], 40.00th=[ 7242], 50.00th=[ 
8356], 60.00th=[ 8979], 00:35:04.519 | 70.00th=[ 9634], 80.00th=[11731], 90.00th=[14222], 95.00th=[19792], 00:35:04.519 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:35:04.519 | 99.99th=[30016] 00:35:04.519 bw ( KiB/s): min=25104, max=32240, per=33.55%, avg=28672.00, stdev=5045.91, samples=2 00:35:04.519 iops : min= 6276, max= 8060, avg=7168.00, stdev=1261.48, samples=2 00:35:04.519 lat (usec) : 750=0.02%, 1000=0.02% 00:35:04.519 lat (msec) : 2=0.80%, 4=3.90%, 10=66.76%, 20=25.20%, 50=3.29% 00:35:04.519 cpu : usr=4.58%, sys=7.96%, ctx=515, majf=0, minf=2 00:35:04.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:35:04.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:04.519 issued rwts: total=6801,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:04.519 00:35:04.519 Run status group 0 (all jobs): 00:35:04.519 READ: bw=78.7MiB/s (82.6MB/s), 15.9MiB/s-26.4MiB/s (16.6MB/s-27.7MB/s), io=79.4MiB (83.2MB), run=1006-1008msec 00:35:04.519 WRITE: bw=83.5MiB/s (87.5MB/s), 17.1MiB/s-27.8MiB/s (17.9MB/s-29.2MB/s), io=84.1MiB (88.2MB), run=1006-1008msec 00:35:04.519 00:35:04.519 Disk stats (read/write): 00:35:04.519 nvme0n1: ios=3634/3929, merge=0/0, ticks=25153/30459, in_queue=55612, util=90.28% 00:35:04.519 nvme0n2: ios=3607/3847, merge=0/0, ticks=24333/21705, in_queue=46038, util=94.62% 00:35:04.519 nvme0n3: ios=2721/3072, merge=0/0, ticks=28199/38999, in_queue=67198, util=98.67% 00:35:04.519 nvme0n4: ios=5632/5887, merge=0/0, ticks=39501/42354, in_queue=81855, util=88.84% 00:35:04.519 18:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:04.519 [global] 00:35:04.519 thread=1 
00:35:04.519 invalidate=1 00:35:04.519 rw=randwrite 00:35:04.519 time_based=1 00:35:04.519 runtime=1 00:35:04.519 ioengine=libaio 00:35:04.519 direct=1 00:35:04.519 bs=4096 00:35:04.519 iodepth=128 00:35:04.519 norandommap=0 00:35:04.519 numjobs=1 00:35:04.519 00:35:04.519 verify_dump=1 00:35:04.519 verify_backlog=512 00:35:04.519 verify_state_save=0 00:35:04.519 do_verify=1 00:35:04.519 verify=crc32c-intel 00:35:04.519 [job0] 00:35:04.519 filename=/dev/nvme0n1 00:35:04.519 [job1] 00:35:04.519 filename=/dev/nvme0n2 00:35:04.519 [job2] 00:35:04.519 filename=/dev/nvme0n3 00:35:04.519 [job3] 00:35:04.519 filename=/dev/nvme0n4 00:35:04.819 Could not set queue depth (nvme0n1) 00:35:04.819 Could not set queue depth (nvme0n2) 00:35:04.819 Could not set queue depth (nvme0n3) 00:35:04.819 Could not set queue depth (nvme0n4) 00:35:05.092 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:05.092 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:05.092 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:05.092 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:05.092 fio-3.35 00:35:05.092 Starting 4 threads 00:35:06.513 00:35:06.513 job0: (groupid=0, jobs=1): err= 0: pid=2262713: Tue Nov 19 18:34:07 2024 00:35:06.513 read: IOPS=5141, BW=20.1MiB/s (21.1MB/s)(20.3MiB/1011msec) 00:35:06.513 slat (nsec): min=1016, max=10764k, avg=93425.02, stdev=648835.58 00:35:06.513 clat (usec): min=4057, max=35922, avg=11160.86, stdev=5194.08 00:35:06.513 lat (usec): min=4063, max=35930, avg=11254.29, stdev=5240.44 00:35:06.513 clat percentiles (usec): 00:35:06.513 | 1.00th=[ 5997], 5.00th=[ 6521], 10.00th=[ 7111], 20.00th=[ 7504], 00:35:06.513 | 30.00th=[ 7963], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[10814], 00:35:06.513 | 
70.00th=[12256], 80.00th=[14353], 90.00th=[17695], 95.00th=[19792], 00:35:06.513 | 99.00th=[33162], 99.50th=[34866], 99.90th=[35390], 99.95th=[35914], 00:35:06.513 | 99.99th=[35914] 00:35:06.513 write: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec); 0 zone resets 00:35:06.513 slat (nsec): min=1607, max=10351k, avg=85580.30, stdev=505994.55 00:35:06.513 clat (usec): min=1174, max=40260, avg=12365.84, stdev=6693.71 00:35:06.513 lat (usec): min=1185, max=40267, avg=12451.42, stdev=6734.67 00:35:06.513 clat percentiles (usec): 00:35:06.513 | 1.00th=[ 3916], 5.00th=[ 4948], 10.00th=[ 5211], 20.00th=[ 6521], 00:35:06.513 | 30.00th=[ 7570], 40.00th=[ 8455], 50.00th=[11338], 60.00th=[13042], 00:35:06.513 | 70.00th=[15401], 80.00th=[18482], 90.00th=[20055], 95.00th=[23200], 00:35:06.513 | 99.00th=[39060], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:35:06.513 | 99.99th=[40109] 00:35:06.513 bw ( KiB/s): min=20552, max=24112, per=24.47%, avg=22332.00, stdev=2517.30, samples=2 00:35:06.513 iops : min= 5138, max= 6028, avg=5583.00, stdev=629.33, samples=2 00:35:06.513 lat (msec) : 2=0.02%, 4=0.55%, 10=50.08%, 20=41.75%, 50=7.59% 00:35:06.513 cpu : usr=3.66%, sys=5.84%, ctx=457, majf=0, minf=1 00:35:06.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:35:06.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:06.513 issued rwts: total=5198,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:06.513 job1: (groupid=0, jobs=1): err= 0: pid=2262729: Tue Nov 19 18:34:07 2024 00:35:06.513 read: IOPS=9708, BW=37.9MiB/s (39.8MB/s)(38.0MiB/1002msec) 00:35:06.513 slat (nsec): min=946, max=7563.1k, avg=52690.61, stdev=411416.24 00:35:06.513 clat (usec): min=2306, max=16077, avg=6988.12, stdev=1930.75 00:35:06.513 lat (usec): min=2314, max=16101, avg=7040.81, 
stdev=1949.53 00:35:06.513 clat percentiles (usec): 00:35:06.513 | 1.00th=[ 3130], 5.00th=[ 4621], 10.00th=[ 5014], 20.00th=[ 5407], 00:35:06.513 | 30.00th=[ 5866], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7046], 00:35:06.513 | 70.00th=[ 7570], 80.00th=[ 8586], 90.00th=[ 9503], 95.00th=[10683], 00:35:06.513 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14091], 99.95th=[14484], 00:35:06.513 | 99.99th=[16057] 00:35:06.513 write: IOPS=9730, BW=38.0MiB/s (39.9MB/s)(38.1MiB/1002msec); 0 zone resets 00:35:06.513 slat (nsec): min=1557, max=9898.6k, avg=45437.44, stdev=331165.90 00:35:06.513 clat (usec): min=1159, max=20700, avg=6067.84, stdev=1991.75 00:35:06.513 lat (usec): min=1170, max=20715, avg=6113.27, stdev=2002.75 00:35:06.513 clat percentiles (usec): 00:35:06.513 | 1.00th=[ 2409], 5.00th=[ 3621], 10.00th=[ 4015], 20.00th=[ 4686], 00:35:06.513 | 30.00th=[ 5342], 40.00th=[ 5735], 50.00th=[ 5932], 60.00th=[ 6194], 00:35:06.513 | 70.00th=[ 6456], 80.00th=[ 6915], 90.00th=[ 8094], 95.00th=[ 8979], 00:35:06.513 | 99.00th=[11600], 99.50th=[20055], 99.90th=[20317], 99.95th=[20317], 00:35:06.513 | 99.99th=[20579] 00:35:06.513 bw ( KiB/s): min=37824, max=40000, per=42.65%, avg=38912.00, stdev=1538.66, samples=2 00:35:06.513 iops : min= 9456, max=10000, avg=9728.00, stdev=384.67, samples=2 00:35:06.513 lat (msec) : 2=0.18%, 4=5.97%, 10=88.05%, 20=5.55%, 50=0.24% 00:35:06.513 cpu : usr=5.99%, sys=7.99%, ctx=659, majf=0, minf=1 00:35:06.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:35:06.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:06.513 issued rwts: total=9728,9750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:06.513 job2: (groupid=0, jobs=1): err= 0: pid=2262746: Tue Nov 19 18:34:07 2024 00:35:06.513 read: IOPS=4655, BW=18.2MiB/s 
(19.1MB/s)(18.4MiB/1010msec) 00:35:06.513 slat (nsec): min=964, max=11165k, avg=75154.30, stdev=573440.61 00:35:06.513 clat (usec): min=2083, max=23785, avg=10341.12, stdev=3673.00 00:35:06.513 lat (usec): min=2088, max=23795, avg=10416.27, stdev=3712.39 00:35:06.513 clat percentiles (usec): 00:35:06.513 | 1.00th=[ 3326], 5.00th=[ 5342], 10.00th=[ 6063], 20.00th=[ 7177], 00:35:06.513 | 30.00th=[ 7767], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[11076], 00:35:06.513 | 70.00th=[11863], 80.00th=[13042], 90.00th=[15139], 95.00th=[17171], 00:35:06.513 | 99.00th=[21103], 99.50th=[22152], 99.90th=[23725], 99.95th=[23725], 00:35:06.513 | 99.99th=[23725] 00:35:06.513 write: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec); 0 zone resets 00:35:06.513 slat (nsec): min=1700, max=37322k, avg=105996.37, stdev=781781.90 00:35:06.513 clat (usec): min=695, max=64450, avg=15424.38, stdev=13784.59 00:35:06.513 lat (usec): min=714, max=64462, avg=15530.38, stdev=13867.85 00:35:06.513 clat percentiles (usec): 00:35:06.513 | 1.00th=[ 1012], 5.00th=[ 2245], 10.00th=[ 4146], 20.00th=[ 5932], 00:35:06.513 | 30.00th=[ 7373], 40.00th=[ 8356], 50.00th=[10290], 60.00th=[12780], 00:35:06.513 | 70.00th=[18220], 80.00th=[19530], 90.00th=[38536], 95.00th=[50594], 00:35:06.513 | 99.00th=[61604], 99.50th=[63177], 99.90th=[64226], 99.95th=[64226], 00:35:06.513 | 99.99th=[64226] 00:35:06.514 bw ( KiB/s): min=19304, max=21392, per=22.30%, avg=20348.00, stdev=1476.44, samples=2 00:35:06.514 iops : min= 4826, max= 5348, avg=5087.00, stdev=369.11, samples=2 00:35:06.514 lat (usec) : 750=0.03%, 1000=0.49% 00:35:06.514 lat (msec) : 2=1.72%, 4=3.29%, 10=41.89%, 20=41.79%, 50=8.01% 00:35:06.514 lat (msec) : 100=2.78% 00:35:06.514 cpu : usr=3.57%, sys=5.95%, ctx=451, majf=0, minf=1 00:35:06.514 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:35:06.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:06.514 issued rwts: total=4702,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:06.514 job3: (groupid=0, jobs=1): err= 0: pid=2262751: Tue Nov 19 18:34:07 2024 00:35:06.514 read: IOPS=2312, BW=9251KiB/s (9474kB/s)(9344KiB/1010msec) 00:35:06.514 slat (nsec): min=916, max=27696k, avg=222587.29, stdev=1542899.79 00:35:06.514 clat (usec): min=3290, max=80987, avg=28502.38, stdev=18943.35 00:35:06.514 lat (usec): min=3292, max=80994, avg=28724.97, stdev=19031.41 00:35:06.514 clat percentiles (usec): 00:35:06.514 | 1.00th=[ 6980], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[12387], 00:35:06.514 | 30.00th=[15270], 40.00th=[16909], 50.00th=[20055], 60.00th=[28181], 00:35:06.514 | 70.00th=[39060], 80.00th=[47973], 90.00th=[53740], 95.00th=[67634], 00:35:06.514 | 99.00th=[81265], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], 00:35:06.514 | 99.99th=[81265] 00:35:06.514 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:35:06.514 slat (nsec): min=1565, max=12603k, avg=182256.74, stdev=813361.36 00:35:06.514 clat (usec): min=1892, max=96756, avg=24031.85, stdev=19414.26 00:35:06.514 lat (usec): min=1899, max=96764, avg=24214.11, stdev=19535.12 00:35:06.514 clat percentiles (usec): 00:35:06.514 | 1.00th=[ 3621], 5.00th=[ 6194], 10.00th=[ 7504], 20.00th=[ 9896], 00:35:06.514 | 30.00th=[11600], 40.00th=[16057], 50.00th=[18744], 60.00th=[21103], 00:35:06.514 | 70.00th=[24249], 80.00th=[33817], 90.00th=[51119], 95.00th=[63701], 00:35:06.514 | 99.00th=[94897], 99.50th=[95945], 99.90th=[96994], 99.95th=[96994], 00:35:06.514 | 99.99th=[96994] 00:35:06.514 bw ( KiB/s): min= 8192, max=12288, per=11.22%, avg=10240.00, stdev=2896.31, samples=2 00:35:06.514 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:35:06.514 lat (msec) : 2=0.12%, 4=0.61%, 10=14.71%, 20=37.97%, 50=34.31% 00:35:06.514 lat (msec) : 100=12.28% 00:35:06.514 cpu : 
usr=1.39%, sys=2.58%, ctx=297, majf=0, minf=1 00:35:06.514 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:35:06.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:06.514 issued rwts: total=2336,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:06.514 00:35:06.514 Run status group 0 (all jobs): 00:35:06.514 READ: bw=84.9MiB/s (89.0MB/s), 9251KiB/s-37.9MiB/s (9474kB/s-39.8MB/s), io=85.8MiB (90.0MB), run=1002-1011msec 00:35:06.514 WRITE: bw=89.1MiB/s (93.4MB/s), 9.90MiB/s-38.0MiB/s (10.4MB/s-39.9MB/s), io=90.1MiB (94.5MB), run=1002-1011msec 00:35:06.514 00:35:06.514 Disk stats (read/write): 00:35:06.514 nvme0n1: ios=4588/4608, merge=0/0, ticks=48077/53905, in_queue=101982, util=87.17% 00:35:06.514 nvme0n2: ios=7920/8192, merge=0/0, ticks=53098/48227, in_queue=101325, util=90.93% 00:35:06.514 nvme0n3: ios=4113/4096, merge=0/0, ticks=38652/57699, in_queue=96351, util=91.99% 00:35:06.514 nvme0n4: ios=2105/2231, merge=0/0, ticks=18704/15844, in_queue=34548, util=97.01% 00:35:06.514 18:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:06.514 18:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2262910 00:35:06.514 18:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:35:06.514 18:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:35:06.514 [global] 00:35:06.514 thread=1 00:35:06.514 invalidate=1 00:35:06.514 rw=read 00:35:06.514 time_based=1 00:35:06.514 runtime=10 00:35:06.514 ioengine=libaio 00:35:06.514 direct=1 00:35:06.514 bs=4096 00:35:06.514 iodepth=1 00:35:06.514 
norandommap=1 00:35:06.514 numjobs=1 00:35:06.514 00:35:06.514 [job0] 00:35:06.514 filename=/dev/nvme0n1 00:35:06.514 [job1] 00:35:06.514 filename=/dev/nvme0n2 00:35:06.514 [job2] 00:35:06.514 filename=/dev/nvme0n3 00:35:06.514 [job3] 00:35:06.514 filename=/dev/nvme0n4 00:35:06.514 Could not set queue depth (nvme0n1) 00:35:06.514 Could not set queue depth (nvme0n2) 00:35:06.514 Could not set queue depth (nvme0n3) 00:35:06.514 Could not set queue depth (nvme0n4) 00:35:06.789 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:06.789 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:06.789 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:06.789 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:06.789 fio-3.35 00:35:06.789 Starting 4 threads 00:35:09.334 18:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:09.334 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=1646592, buflen=4096 00:35:09.334 fio: pid=2263192, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:09.334 18:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:09.597 18:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:09.597 18:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:09.597 fio: io_u error on file 
/dev/nvme0n3: Operation not supported: read offset=274432, buflen=4096 00:35:09.597 fio: pid=2263186, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:09.857 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=753664, buflen=4096 00:35:09.857 fio: pid=2263147, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:09.857 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:09.857 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:35:10.119 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12783616, buflen=4096 00:35:10.119 fio: pid=2263163, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:10.119 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:10.119 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:10.119 00:35:10.119 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2263147: Tue Nov 19 18:34:11 2024 00:35:10.119 read: IOPS=62, BW=249KiB/s (255kB/s)(736KiB/2958msec) 00:35:10.119 slat (usec): min=7, max=241, avg=28.91, stdev=21.35 00:35:10.119 clat (usec): min=685, max=42084, avg=15910.54, stdev=19666.51 00:35:10.119 lat (usec): min=698, max=42110, avg=15939.46, stdev=19668.98 00:35:10.119 clat percentiles (usec): 00:35:10.119 | 1.00th=[ 693], 5.00th=[ 824], 10.00th=[ 971], 20.00th=[ 1045], 00:35:10.119 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1139], 60.00th=[ 1237], 
00:35:10.119 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:10.119 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:10.119 | 99.99th=[42206] 00:35:10.119 bw ( KiB/s): min= 96, max= 992, per=5.74%, avg=276.80, stdev=399.82, samples=5 00:35:10.119 iops : min= 24, max= 248, avg=69.20, stdev=99.96, samples=5 00:35:10.119 lat (usec) : 750=2.16%, 1000=10.27% 00:35:10.119 lat (msec) : 2=50.81%, 50=36.22% 00:35:10.119 cpu : usr=0.03%, sys=0.27%, ctx=190, majf=0, minf=2 00:35:10.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.119 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.119 issued rwts: total=185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:10.119 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2263163: Tue Nov 19 18:34:11 2024 00:35:10.119 read: IOPS=994, BW=3976KiB/s (4071kB/s)(12.2MiB/3140msec) 00:35:10.119 slat (usec): min=6, max=32784, avg=52.41, stdev=776.79 00:35:10.119 clat (usec): min=438, max=6057, avg=938.52, stdev=144.77 00:35:10.119 lat (usec): min=445, max=33723, avg=990.94, stdev=790.73 00:35:10.119 clat percentiles (usec): 00:35:10.119 | 1.00th=[ 652], 5.00th=[ 775], 10.00th=[ 824], 20.00th=[ 873], 00:35:10.119 | 30.00th=[ 914], 40.00th=[ 930], 50.00th=[ 947], 60.00th=[ 955], 00:35:10.119 | 70.00th=[ 971], 80.00th=[ 988], 90.00th=[ 1029], 95.00th=[ 1074], 00:35:10.119 | 99.00th=[ 1188], 99.50th=[ 1467], 99.90th=[ 2040], 99.95th=[ 2278], 00:35:10.119 | 99.99th=[ 6063] 00:35:10.119 bw ( KiB/s): min= 3534, max= 4224, per=83.76%, avg=4027.67, stdev=247.93, samples=6 00:35:10.119 iops : min= 883, max= 1056, avg=1006.83, stdev=62.18, samples=6 00:35:10.119 lat (usec) : 500=0.06%, 750=4.16%, 1000=79.12% 00:35:10.119 lat 
(msec) : 2=16.43%, 4=0.16%, 10=0.03% 00:35:10.119 cpu : usr=2.07%, sys=3.70%, ctx=3128, majf=0, minf=1 00:35:10.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.119 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.119 issued rwts: total=3122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:10.119 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2263186: Tue Nov 19 18:34:11 2024 00:35:10.119 read: IOPS=24, BW=95.7KiB/s (98.0kB/s)(268KiB/2800msec) 00:35:10.119 slat (usec): min=12, max=13755, avg=228.85, stdev=1664.86 00:35:10.119 clat (usec): min=1035, max=45470, avg=41229.52, stdev=5021.47 00:35:10.119 lat (usec): min=1099, max=55040, avg=41461.39, stdev=5291.95 00:35:10.119 clat percentiles (usec): 00:35:10.119 | 1.00th=[ 1037], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:10.119 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:35:10.119 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:10.119 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:35:10.119 | 99.99th=[45351] 00:35:10.119 bw ( KiB/s): min= 88, max= 104, per=2.00%, avg=96.00, stdev= 5.66, samples=5 00:35:10.119 iops : min= 22, max= 26, avg=24.00, stdev= 1.41, samples=5 00:35:10.119 lat (msec) : 2=1.47%, 50=97.06% 00:35:10.119 cpu : usr=0.14%, sys=0.00%, ctx=70, majf=0, minf=1 00:35:10.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.119 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.119 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.119 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:35:10.119 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2263192: Tue Nov 19 18:34:11 2024 00:35:10.119 read: IOPS=155, BW=621KiB/s (636kB/s)(1608KiB/2588msec) 00:35:10.119 slat (nsec): min=6468, max=63337, avg=26175.51, stdev=4811.56 00:35:10.119 clat (usec): min=330, max=41665, avg=6341.39, stdev=13665.99 00:35:10.119 lat (usec): min=357, max=41673, avg=6367.57, stdev=13665.43 00:35:10.119 clat percentiles (usec): 00:35:10.119 | 1.00th=[ 461], 5.00th=[ 594], 10.00th=[ 619], 20.00th=[ 685], 00:35:10.119 | 30.00th=[ 758], 40.00th=[ 889], 50.00th=[ 1012], 60.00th=[ 1106], 00:35:10.119 | 70.00th=[ 1172], 80.00th=[ 1254], 90.00th=[41157], 95.00th=[41157], 00:35:10.119 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:35:10.119 | 99.99th=[41681] 00:35:10.119 bw ( KiB/s): min= 96, max= 1784, per=13.31%, avg=640.00, stdev=777.54, samples=5 00:35:10.119 iops : min= 24, max= 446, avg=160.00, stdev=194.39, samples=5 00:35:10.119 lat (usec) : 500=1.74%, 750=27.54%, 1000=19.11% 00:35:10.119 lat (msec) : 2=35.98%, 4=1.99%, 50=13.40% 00:35:10.119 cpu : usr=0.35%, sys=0.50%, ctx=403, majf=0, minf=2 00:35:10.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.119 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.119 issued rwts: total=403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:10.120 00:35:10.120 Run status group 0 (all jobs): 00:35:10.120 READ: bw=4808KiB/s (4923kB/s), 95.7KiB/s-3976KiB/s (98.0kB/s-4071kB/s), io=14.7MiB (15.5MB), run=2588-3140msec 00:35:10.120 00:35:10.120 Disk stats (read/write): 00:35:10.120 nvme0n1: ios=200/0, merge=0/0, ticks=3144/0, in_queue=3144, util=98.96% 00:35:10.120 nvme0n2: ios=3095/0, merge=0/0, ticks=2523/0, 
in_queue=2523, util=93.28% 00:35:10.120 nvme0n3: ios=62/0, merge=0/0, ticks=2555/0, in_queue=2555, util=95.99% 00:35:10.120 nvme0n4: ios=396/0, merge=0/0, ticks=2270/0, in_queue=2270, util=96.06% 00:35:10.120 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:10.120 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:35:10.380 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:10.380 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:35:10.641 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:10.641 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:35:10.641 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:10.641 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:35:10.902 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:10.902 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2262910 00:35:10.902 18:34:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:10.902 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:10.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:10.902 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:10.902 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:35:10.902 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:10.902 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:10.902 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:10.902 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:11.162 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:35:11.162 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:11.162 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:11.162 nvmf hotplug test: fio failed as expected 00:35:11.163 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:11.163 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 
00:35:11.163 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:35:11.163 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:11.163 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:11.163 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:11.163 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:11.163 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:35:11.163 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:11.163 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:35:11.163 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:11.163 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:11.163 rmmod nvme_tcp 00:35:11.163 rmmod nvme_fabrics 00:35:11.163 rmmod nvme_keyring 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2259734 ']' 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2259734 00:35:11.423 18:34:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2259734 ']' 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2259734 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2259734 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2259734' 00:35:11.423 killing process with pid 2259734 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2259734 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2259734 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:11.423 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.967 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:13.967 00:35:13.967 real 0m28.375s 00:35:13.967 user 2m27.181s 00:35:13.967 sys 0m12.303s 00:35:13.967 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:13.967 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:13.967 ************************************ 00:35:13.967 END TEST nvmf_fio_target 00:35:13.967 ************************************ 00:35:13.967 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:13.967 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:13.967 18:34:14 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:13.967 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:13.967 ************************************ 00:35:13.967 START TEST nvmf_bdevio 00:35:13.967 ************************************ 00:35:13.967 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:13.967 * Looking for test storage... 00:35:13.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:13.967 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:13.967 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:35:13.967 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:13.967 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:13.967 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:13.967 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:13.967 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:13.967 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:13.967 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:13.967 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:13.967 18:34:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:13.967 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:13.967 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:13.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.968 --rc genhtml_branch_coverage=1 
00:35:13.968 --rc genhtml_function_coverage=1 00:35:13.968 --rc genhtml_legend=1 00:35:13.968 --rc geninfo_all_blocks=1 00:35:13.968 --rc geninfo_unexecuted_blocks=1 00:35:13.968 00:35:13.968 ' 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:13.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.968 --rc genhtml_branch_coverage=1 00:35:13.968 --rc genhtml_function_coverage=1 00:35:13.968 --rc genhtml_legend=1 00:35:13.968 --rc geninfo_all_blocks=1 00:35:13.968 --rc geninfo_unexecuted_blocks=1 00:35:13.968 00:35:13.968 ' 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:13.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.968 --rc genhtml_branch_coverage=1 00:35:13.968 --rc genhtml_function_coverage=1 00:35:13.968 --rc genhtml_legend=1 00:35:13.968 --rc geninfo_all_blocks=1 00:35:13.968 --rc geninfo_unexecuted_blocks=1 00:35:13.968 00:35:13.968 ' 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:13.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.968 --rc genhtml_branch_coverage=1 00:35:13.968 --rc genhtml_function_coverage=1 00:35:13.968 --rc genhtml_legend=1 00:35:13.968 --rc geninfo_all_blocks=1 00:35:13.968 --rc geninfo_unexecuted_blocks=1 00:35:13.968 00:35:13.968 ' 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:13.968 18:34:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:13.968 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.969 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:13.969 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:13.969 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:13.969 18:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:22.111 18:34:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:22.111 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:22.112 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:22.112 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:22.112 18:34:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:22.112 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:22.112 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:22.112 18:34:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:22.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:22.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:35:22.112 00:35:22.112 --- 10.0.0.2 ping statistics --- 00:35:22.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:22.112 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:22.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:22.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:35:22.112 00:35:22.112 --- 10.0.0.1 ping statistics --- 00:35:22.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:22.112 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2268160 00:35:22.112 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2268160 00:35:22.113 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:22.113 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2268160 ']' 00:35:22.113 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:22.113 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:22.113 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:22.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:22.113 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:22.113 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:22.113 [2024-11-19 18:34:22.732237] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:22.113 [2024-11-19 18:34:22.733363] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:35:22.113 [2024-11-19 18:34:22.733414] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:22.113 [2024-11-19 18:34:22.832374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:22.113 [2024-11-19 18:34:22.885044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:22.113 [2024-11-19 18:34:22.885097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:22.113 [2024-11-19 18:34:22.885106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:22.113 [2024-11-19 18:34:22.885113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:22.113 [2024-11-19 18:34:22.885119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:22.113 [2024-11-19 18:34:22.887148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:22.113 [2024-11-19 18:34:22.887310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:22.113 [2024-11-19 18:34:22.887462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:22.113 [2024-11-19 18:34:22.887462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:22.113 [2024-11-19 18:34:22.963626] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:22.113 [2024-11-19 18:34:22.964541] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:22.113 [2024-11-19 18:34:22.964759] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:35:22.113 [2024-11-19 18:34:22.965339] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:22.113 [2024-11-19 18:34:22.965344] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:22.113 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:22.113 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:22.113 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:22.113 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:22.113 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:22.374 [2024-11-19 18:34:23.596569] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:22.374 Malloc0 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:22.374 [2024-11-19 18:34:23.688623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:22.374 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:22.374 { 00:35:22.374 "params": { 00:35:22.374 "name": "Nvme$subsystem", 00:35:22.374 "trtype": "$TEST_TRANSPORT", 00:35:22.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:22.374 "adrfam": "ipv4", 00:35:22.374 "trsvcid": "$NVMF_PORT", 00:35:22.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:22.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:22.374 "hdgst": ${hdgst:-false}, 00:35:22.375 "ddgst": ${ddgst:-false} 00:35:22.375 }, 00:35:22.375 "method": "bdev_nvme_attach_controller" 00:35:22.375 } 00:35:22.375 EOF 00:35:22.375 )") 00:35:22.375 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:22.375 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:35:22.375 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:22.375 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:22.375 "params": { 00:35:22.375 "name": "Nvme1", 00:35:22.375 "trtype": "tcp", 00:35:22.375 "traddr": "10.0.0.2", 00:35:22.375 "adrfam": "ipv4", 00:35:22.375 "trsvcid": "4420", 00:35:22.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:22.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:22.375 "hdgst": false, 00:35:22.375 "ddgst": false 00:35:22.375 }, 00:35:22.375 "method": "bdev_nvme_attach_controller" 00:35:22.375 }' 00:35:22.375 [2024-11-19 18:34:23.747929] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:35:22.375 [2024-11-19 18:34:23.748002] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268475 ] 00:35:22.636 [2024-11-19 18:34:23.842801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:22.636 [2024-11-19 18:34:23.900487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:22.636 [2024-11-19 18:34:23.900650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:22.636 [2024-11-19 18:34:23.900650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:22.897 I/O targets: 00:35:22.897 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:22.897 00:35:22.897 00:35:22.897 CUnit - A unit testing framework for C - Version 2.1-3 00:35:22.897 http://cunit.sourceforge.net/ 00:35:22.897 00:35:22.897 00:35:22.897 Suite: bdevio tests on: Nvme1n1 00:35:22.897 Test: blockdev write read block ...passed 00:35:22.897 Test: blockdev write zeroes read block ...passed 00:35:22.897 Test: blockdev write zeroes read no split ...passed 00:35:22.897 Test: blockdev 
write zeroes read split ...passed 00:35:23.158 Test: blockdev write zeroes read split partial ...passed 00:35:23.158 Test: blockdev reset ...[2024-11-19 18:34:24.389005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:23.158 [2024-11-19 18:34:24.389112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92c970 (9): Bad file descriptor 00:35:23.158 [2024-11-19 18:34:24.401278] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:35:23.158 passed 00:35:23.158 Test: blockdev write read 8 blocks ...passed 00:35:23.158 Test: blockdev write read size > 128k ...passed 00:35:23.158 Test: blockdev write read invalid size ...passed 00:35:23.158 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:23.158 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:23.158 Test: blockdev write read max offset ...passed 00:35:23.158 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:23.418 Test: blockdev writev readv 8 blocks ...passed 00:35:23.419 Test: blockdev writev readv 30 x 1block ...passed 00:35:23.419 Test: blockdev writev readv block ...passed 00:35:23.419 Test: blockdev writev readv size > 128k ...passed 00:35:23.419 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:23.419 Test: blockdev comparev and writev ...[2024-11-19 18:34:24.744636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:23.419 [2024-11-19 18:34:24.744687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.419 [2024-11-19 18:34:24.744712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:23.419 
[2024-11-19 18:34:24.744721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:23.419 [2024-11-19 18:34:24.745239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:23.419 [2024-11-19 18:34:24.745257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:23.419 [2024-11-19 18:34:24.745271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:23.419 [2024-11-19 18:34:24.745280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:23.419 [2024-11-19 18:34:24.745771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:23.419 [2024-11-19 18:34:24.745785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:23.419 [2024-11-19 18:34:24.745799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:23.419 [2024-11-19 18:34:24.745807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:23.419 [2024-11-19 18:34:24.746255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:23.419 [2024-11-19 18:34:24.746269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:23.419 [2024-11-19 18:34:24.746283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:23.419 [2024-11-19 18:34:24.746291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:23.419 passed 00:35:23.419 Test: blockdev nvme passthru rw ...passed 00:35:23.419 Test: blockdev nvme passthru vendor specific ...[2024-11-19 18:34:24.829699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:23.419 [2024-11-19 18:34:24.829718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:23.419 [2024-11-19 18:34:24.829956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:23.419 [2024-11-19 18:34:24.829970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:23.419 [2024-11-19 18:34:24.830204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:23.419 [2024-11-19 18:34:24.830217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:23.419 [2024-11-19 18:34:24.830441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:23.419 [2024-11-19 18:34:24.830453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:23.419 passed 00:35:23.419 Test: blockdev nvme admin passthru ...passed 00:35:23.680 Test: blockdev copy ...passed 00:35:23.680 00:35:23.680 Run Summary: Type Total Ran Passed Failed Inactive 00:35:23.680 suites 1 1 n/a 0 0 00:35:23.680 tests 23 23 23 0 0 00:35:23.680 asserts 152 152 152 0 n/a 00:35:23.680 00:35:23.680 Elapsed time = 1.340 
seconds 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:23.680 rmmod nvme_tcp 00:35:23.680 rmmod nvme_fabrics 00:35:23.680 rmmod nvme_keyring 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2268160 ']' 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2268160 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2268160 ']' 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2268160 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:23.680 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2268160 00:35:23.940 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:23.940 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:23.940 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2268160' 00:35:23.940 killing process with pid 2268160 00:35:23.940 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2268160 00:35:23.940 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2268160 00:35:23.940 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:23.940 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:23.940 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:23.940 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:35:23.940 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:23.940 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:23.940 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:23.940 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:23.940 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:23.940 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:23.940 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:23.940 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.485 18:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:26.486 00:35:26.486 real 0m12.448s 00:35:26.486 user 0m11.052s 00:35:26.486 sys 0m6.496s 00:35:26.486 18:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:26.486 18:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:26.486 ************************************ 00:35:26.486 END TEST nvmf_bdevio 00:35:26.486 ************************************ 00:35:26.486 18:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:26.486 00:35:26.486 real 5m1.118s 00:35:26.486 user 10m28.033s 00:35:26.486 sys 2m6.836s 00:35:26.486 18:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:35:26.486 18:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:26.486 ************************************ 00:35:26.486 END TEST nvmf_target_core_interrupt_mode 00:35:26.486 ************************************ 00:35:26.486 18:34:27 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:26.486 18:34:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:26.486 18:34:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:26.486 18:34:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:26.486 ************************************ 00:35:26.486 START TEST nvmf_interrupt 00:35:26.486 ************************************ 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:26.486 * Looking for test storage... 
00:35:26.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:26.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.486 --rc genhtml_branch_coverage=1 00:35:26.486 --rc genhtml_function_coverage=1 00:35:26.486 --rc genhtml_legend=1 00:35:26.486 --rc geninfo_all_blocks=1 00:35:26.486 --rc geninfo_unexecuted_blocks=1 00:35:26.486 00:35:26.486 ' 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:26.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.486 --rc genhtml_branch_coverage=1 00:35:26.486 --rc 
genhtml_function_coverage=1 00:35:26.486 --rc genhtml_legend=1 00:35:26.486 --rc geninfo_all_blocks=1 00:35:26.486 --rc geninfo_unexecuted_blocks=1 00:35:26.486 00:35:26.486 ' 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:26.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.486 --rc genhtml_branch_coverage=1 00:35:26.486 --rc genhtml_function_coverage=1 00:35:26.486 --rc genhtml_legend=1 00:35:26.486 --rc geninfo_all_blocks=1 00:35:26.486 --rc geninfo_unexecuted_blocks=1 00:35:26.486 00:35:26.486 ' 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:26.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.486 --rc genhtml_branch_coverage=1 00:35:26.486 --rc genhtml_function_coverage=1 00:35:26.486 --rc genhtml_legend=1 00:35:26.486 --rc geninfo_all_blocks=1 00:35:26.486 --rc geninfo_unexecuted_blocks=1 00:35:26.486 00:35:26.486 ' 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:26.486 
18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.486 
18:34:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:26.486 18:34:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:26.486 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:26.487 
18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:26.487 18:34:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:34.625 18:34:34 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:34.625 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:34.625 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:34.625 18:34:34 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:34.625 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:34.625 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:34.625 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:34.626 18:34:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:34.626 18:34:35 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:34.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:34.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:35:34.626 00:35:34.626 --- 10.0.0.2 ping statistics --- 00:35:34.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.626 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:34.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:34.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:35:34.626 00:35:34.626 --- 10.0.0.1 ping statistics --- 00:35:34.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.626 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:34.626 18:34:35 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2272823 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2272823 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2272823 ']' 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:34.626 18:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:34.626 [2024-11-19 18:34:35.387204] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:34.626 [2024-11-19 18:34:35.388723] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:35:34.626 [2024-11-19 18:34:35.388793] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:34.626 [2024-11-19 18:34:35.489593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:34.626 [2024-11-19 18:34:35.541018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:34.626 [2024-11-19 18:34:35.541071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:34.626 [2024-11-19 18:34:35.541080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:34.626 [2024-11-19 18:34:35.541087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:34.626 [2024-11-19 18:34:35.541093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:34.626 [2024-11-19 18:34:35.542745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.626 [2024-11-19 18:34:35.542749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.626 [2024-11-19 18:34:35.619011] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:34.626 [2024-11-19 18:34:35.619581] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:34.626 [2024-11-19 18:34:35.619885] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:34.887 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:34.887 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:35:34.887 18:34:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:34.887 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:34.887 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:34.887 18:34:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:34.887 18:34:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:34.887 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:34.887 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:34.887 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:34.887 5000+0 records in 00:35:34.887 5000+0 records out 00:35:34.887 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0195279 s, 524 MB/s 00:35:34.887 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:34.887 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.888 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:34.888 AIO0 00:35:34.888 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.888 18:34:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:34.888 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.888 18:34:36 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:34.888 [2024-11-19 18:34:36.327797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:34.888 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.888 18:34:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:34.888 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.888 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:34.888 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.888 18:34:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:34.888 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.888 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:35.150 [2024-11-19 18:34:36.372226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2272823 0 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2272823 0 idle 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2272823 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2272823 -w 256 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2272823 root 20 0 128.2g 42624 32256 S 0.0 0.0 0:00.32 reactor_0' 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2272823 root 20 0 128.2g 42624 32256 S 0.0 0.0 0:00.32 reactor_0 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:35.150 
18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2272823 1 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2272823 1 idle 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2272823 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2272823 -w 256 00:35:35.150 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:35.411 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2272828 root 20 0 128.2g 42624 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:35.411 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2272828 root 20 0 128.2g 
42624 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:35.411 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:35.411 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:35.411 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:35.411 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2273191 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2272823 0 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2272823 0 busy 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2272823 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2272823 -w 256 00:35:35.412 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2272823 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:00.49 reactor_0' 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2272823 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:00.49 reactor_0 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:35.673 18:34:36 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2272823 1 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2272823 1 busy 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2272823 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2272823 -w 256 00:35:35.673 18:34:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:35.673 18:34:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2272828 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:00.27 reactor_1' 00:35:35.673 18:34:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2272828 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:00.27 reactor_1 00:35:35.673 18:34:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:35.673 18:34:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:35.673 18:34:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:35.935 18:34:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:35:35.935 18:34:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:35.935 18:34:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:35.935 18:34:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:35.935 18:34:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:35.935 18:34:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2273191 00:35:45.935 Initializing NVMe Controllers 00:35:45.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:45.935 Controller IO queue size 256, less than required. 00:35:45.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:45.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:45.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:45.935 Initialization complete. Launching workers. 
00:35:45.935 ======================================================== 00:35:45.935 Latency(us) 00:35:45.935 Device Information : IOPS MiB/s Average min max 00:35:45.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19149.50 74.80 13372.87 4406.58 33344.67 00:35:45.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19745.60 77.13 12965.65 8457.53 27768.15 00:35:45.935 ======================================================== 00:35:45.935 Total : 38895.10 151.93 13166.14 4406.58 33344.67 00:35:45.935 00:35:45.935 18:34:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:45.935 18:34:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2272823 0 00:35:45.935 18:34:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2272823 0 idle 00:35:45.935 18:34:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2272823 00:35:45.935 18:34:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:45.935 18:34:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:45.935 18:34:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:45.935 18:34:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:45.935 18:34:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:45.935 18:34:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:45.935 18:34:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:45.935 18:34:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:45.935 18:34:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:45.935 18:34:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:45.935 18:34:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 
2272823 -w 256 00:35:45.935 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2272823 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.31 reactor_0' 00:35:45.935 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2272823 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.31 reactor_0 00:35:45.935 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:45.935 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:45.935 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:45.935 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:45.935 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:45.935 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:45.935 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:45.935 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:45.935 18:34:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:45.935 18:34:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2272823 1 00:35:45.935 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2272823 1 idle 00:35:45.935 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2272823 00:35:45.935 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:45.935 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:45.935 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:45.936 18:34:47 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2272823 -w 256 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2272828 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1' 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2272828 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:45.936 18:34:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:46.507 18:34:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:35:46.507 18:34:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:46.507 18:34:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:46.507 18:34:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:46.507 18:34:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2272823 0 00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2272823 0 idle 00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2272823 00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2272823 -w 256
00:35:49.063 18:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:35:49.063 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2272823 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.69 reactor_0'
00:35:49.063 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2272823 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.69 reactor_0
00:35:49.063 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:35:49.063 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:35:49.063 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:35:49.063 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:35:49.063 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:35:49.063 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2272823 1
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2272823 1 idle
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2272823
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2272823 -w 256
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2272828 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.15 reactor_1'
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2272828 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.15 reactor_1
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:35:49.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:49.064 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:49.064 rmmod nvme_tcp
00:35:49.064 rmmod nvme_fabrics
00:35:49.064 rmmod nvme_keyring
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2272823 ']'
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2272823
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2272823 ']'
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2272823
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2272823
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2272823'
00:35:49.325 killing process with pid 2272823
18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2272823
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2272823
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:49.325 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore
00:35:49.586 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:49.586 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:49.586 18:34:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:49.586 18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:35:49.586 18:34:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:51.498 18:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:51.498
00:35:51.498 real 0m25.297s
00:35:51.498 user 0m40.312s
00:35:51.498 sys 0m9.761s
00:35:51.498 18:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:51.498 18:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:35:51.498 ************************************
00:35:51.498 END TEST nvmf_interrupt
00:35:51.498 ************************************
00:35:51.498
00:35:51.498 real 30m6.176s
00:35:51.498 user 61m27.852s
00:35:51.498 sys 10m19.839s
00:35:51.498 18:34:52 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:51.498 18:34:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:51.498 ************************************
00:35:51.498 END TEST nvmf_tcp
00:35:51.498 ************************************
00:35:51.498 18:34:52 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]]
00:35:51.498 18:34:52 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:35:51.498 18:34:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:35:51.498 18:34:52 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:51.498 18:34:52 -- common/autotest_common.sh@10 -- # set +x
00:35:51.759 ************************************
00:35:51.759 START TEST spdkcli_nvmf_tcp
************************************
00:35:51.759 18:34:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:35:51.759 * Looking for test storage...
00:35:51.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:51.759 18:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:35:51.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:51.760 --rc genhtml_branch_coverage=1
00:35:51.760 --rc genhtml_function_coverage=1
00:35:51.760 --rc genhtml_legend=1
00:35:51.760 --rc geninfo_all_blocks=1
00:35:51.760 --rc geninfo_unexecuted_blocks=1
00:35:51.760
00:35:51.760 '
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:35:51.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:51.760 --rc genhtml_branch_coverage=1
00:35:51.760 --rc genhtml_function_coverage=1
00:35:51.760 --rc genhtml_legend=1
00:35:51.760 --rc geninfo_all_blocks=1
00:35:51.760 --rc geninfo_unexecuted_blocks=1
00:35:51.760
00:35:51.760 '
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:35:51.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:51.760 --rc genhtml_branch_coverage=1
00:35:51.760 --rc genhtml_function_coverage=1
00:35:51.760 --rc genhtml_legend=1
00:35:51.760 --rc geninfo_all_blocks=1
00:35:51.760 --rc geninfo_unexecuted_blocks=1
00:35:51.760
00:35:51.760 '
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:35:51.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:51.760 --rc genhtml_branch_coverage=1
00:35:51.760 --rc genhtml_function_coverage=1
00:35:51.760 --rc genhtml_legend=1
00:35:51.760 --rc geninfo_all_blocks=1
00:35:51.760 --rc geninfo_unexecuted_blocks=1
00:35:51.760
00:35:51.760 '
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:35:51.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2276367
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2276367
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2276367 ']'
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:51.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:51.760 18:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:52.020 [2024-11-19 18:34:53.287752] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization...
[2024-11-19 18:34:53.287826] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2276367 ]
[2024-11-19 18:34:53.377813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
[2024-11-19 18:34:53.430964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-19 18:34:53.430969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:52.964 18:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:52.964 18:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0
00:35:52.964 18:34:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:35:52.964 18:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:52.964 18:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:52.964 18:34:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:35:52.964 18:34:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:35:52.964 18:34:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:35:52.964 18:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:52.964 18:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:35:52.964 18:34:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:35:52.964 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:35:52.964 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:35:52.964 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:35:52.964 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:35:52.964 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:35:52.964 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:35:52.964 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:35:52.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:35:52.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:35:52.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:35:52.964 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:35:52.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:35:52.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:35:52.964 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:35:52.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:35:52.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:35:52.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:35:52.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:35:52.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:35:52.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:35:52.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:35:52.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:35:52.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True
00:35:52.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:35:52.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:35:52.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:35:52.964 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:35:52.964 '
00:35:55.509 [2024-11-19 18:34:56.886171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:56.891 [2024-11-19 18:34:58.246400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 ***
00:35:59.451 [2024-11-19 18:35:00.781563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 ***
00:36:01.993 [2024-11-19 18:35:03.003845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 ***
00:36:03.373 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:36:03.373 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:36:03.373 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:36:03.373 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:36:03.373 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:36:03.373 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:36:03.373 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:36:03.373 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:36:03.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:36:03.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:36:03.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:36:03.373 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:36:03.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:36:03.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:36:03.373 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:36:03.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:36:03.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:36:03.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:36:03.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:36:03.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:36:03.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:36:03.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:36:03.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:36:03.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:36:03.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:36:03.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:36:03.373 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:36:03.373 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:36:03.373 18:35:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:36:03.373 18:35:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:03.373 18:35:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:03.373 18:35:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:36:03.373 18:35:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:03.373 18:35:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:03.373 18:35:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match
00:36:03.373 18:35:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:36:03.942 18:35:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:36:03.942 18:35:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:36:03.942 18:35:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:36:03.942 18:35:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:03.942 18:35:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:03.942 18:35:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:36:03.942 18:35:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:03.942 18:35:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:03.942 18:35:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:36:03.942 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:36:03.942 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:36:03.942 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:36:03.942 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\''
00:36:03.942 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\''
00:36:03.942 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:36:03.942 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:36:03.942 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:36:03.942 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:36:03.942 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:36:03.942 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:36:03.942 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:36:03.942 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:36:03.942 '
00:36:10.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:36:10.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:36:10.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:36:10.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:36:10.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False]
00:36:10.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False]
00:36:10.526 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:36:10.526 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:36:10.526 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:36:10.526 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:36:10.526 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:36:10.526 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:36:10.526 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:36:10.526 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:36:10.526 18:35:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:36:10.526 18:35:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:10.526 18:35:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2276367
00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2276367 ']'
00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2276367
00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname
00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2276367
00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2276367'
killing process with pid 2276367
00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2276367
00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2276367
00:36:10.526 18:35:11
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2276367 ']' 00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2276367 00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2276367 ']' 00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2276367 00:36:10.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2276367) - No such process 00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2276367 is not found' 00:36:10.526 Process with pid 2276367 is not found 00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:10.526 00:36:10.526 real 0m18.200s 00:36:10.526 user 0m40.381s 00:36:10.526 sys 0m0.943s 00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:10.526 18:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:10.526 ************************************ 00:36:10.526 END TEST spdkcli_nvmf_tcp 00:36:10.526 ************************************ 00:36:10.526 18:35:11 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:10.526 18:35:11 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:10.526 18:35:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 
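The `run_test` invocation above is what produces the starred `START TEST` / `END TEST` banners in this log. A hedged sketch of that wrapper's shape (the real helper in autotest_common.sh also records timing and xtrace state; the function name `run_test_sketch` and the bare banner-only body are assumptions):

```shell
#!/usr/bin/env bash
# Hypothetical simplification of the run_test pattern: print the START
# banner, run the test command, print the END banner, keep its exit status.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"                 # the test command itself, e.g. a test script path
    local rc=$?          # capture its status before returning
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
run_test_sketch demo true
```

In the log the wrapped command is the full path to identify_passthru.sh with `--transport=tcp`; here a no-op stands in for it.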
00:36:10.526 18:35:11 -- common/autotest_common.sh@10 -- # set +x 00:36:10.526 ************************************ 00:36:10.526 START TEST nvmf_identify_passthru 00:36:10.526 ************************************ 00:36:10.526 18:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:10.526 * Looking for test storage... 00:36:10.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:10.526 18:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:10.526 18:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:36:10.526 18:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:10.526 18:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:10.526 18:35:11 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:10.526 18:35:11 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:10.526 18:35:11 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:10.527 18:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:10.527 18:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:10.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:10.527 --rc genhtml_branch_coverage=1 00:36:10.527 --rc genhtml_function_coverage=1 00:36:10.527 --rc genhtml_legend=1 00:36:10.527 --rc geninfo_all_blocks=1 00:36:10.527 --rc geninfo_unexecuted_blocks=1 00:36:10.527 
00:36:10.527 ' 00:36:10.527 18:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:10.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:10.527 --rc genhtml_branch_coverage=1 00:36:10.527 --rc genhtml_function_coverage=1 00:36:10.527 --rc genhtml_legend=1 00:36:10.527 --rc geninfo_all_blocks=1 00:36:10.527 --rc geninfo_unexecuted_blocks=1 00:36:10.527 00:36:10.527 ' 00:36:10.527 18:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:10.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:10.527 --rc genhtml_branch_coverage=1 00:36:10.527 --rc genhtml_function_coverage=1 00:36:10.527 --rc genhtml_legend=1 00:36:10.527 --rc geninfo_all_blocks=1 00:36:10.527 --rc geninfo_unexecuted_blocks=1 00:36:10.527 00:36:10.527 ' 00:36:10.527 18:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:10.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:10.527 --rc genhtml_branch_coverage=1 00:36:10.527 --rc genhtml_function_coverage=1 00:36:10.527 --rc genhtml_legend=1 00:36:10.527 --rc geninfo_all_blocks=1 00:36:10.527 --rc geninfo_unexecuted_blocks=1 00:36:10.527 00:36:10.527 ' 00:36:10.527 18:35:11 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:10.527 18:35:11 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:10.527 18:35:11 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.527 18:35:11 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.527 18:35:11 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.527 18:35:11 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:10.527 18:35:11 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:36:10.527 18:35:11 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:10.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:10.527 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:10.527 18:35:11 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:10.527 18:35:11 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:10.528 18:35:11 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.528 18:35:11 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.528 18:35:11 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.528 18:35:11 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:10.528 18:35:11 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:10.528 18:35:11 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:10.528 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:10.528 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:10.528 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:10.528 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:10.528 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:10.528 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:10.528 18:35:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:10.528 18:35:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.528 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:10.528 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:10.528 18:35:11 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:36:10.528 18:35:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:17.313 
18:35:18 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:17.313 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:17.313 Found 0000:4b:00.1 
(0x8086 - 0x159b) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:17.313 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:17.313 18:35:18 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:17.313 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:17.313 
18:35:18 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:17.313 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:17.314 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:17.314 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:17.314 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:17.314 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:17.314 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:17.314 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:17.314 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:17.314 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:17.576 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:17.577 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:17.577 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:17.577 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:17.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:17.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:36:17.577 00:36:17.577 --- 10.0.0.2 ping statistics --- 00:36:17.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:17.577 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:36:17.577 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:17.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:17.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:36:17.577 00:36:17.577 --- 10.0.0.1 ping statistics --- 00:36:17.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:17.577 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:36:17.577 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:17.577 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:17.577 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:17.577 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:17.577 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:17.577 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:17.577 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:17.577 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:17.577 18:35:18 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:17.577 18:35:18 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:17.577 18:35:18 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:17.577 18:35:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:17.577 18:35:18 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:17.577 
18:35:18 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:36:17.577 18:35:18 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:36:17.577 18:35:18 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:36:17.577 18:35:18 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:36:17.577 18:35:18 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:36:17.577 18:35:18 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:36:17.577 18:35:18 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:17.577 18:35:18 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:17.577 18:35:18 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:36:17.577 18:35:18 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:36:17.577 18:35:18 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:36:17.577 18:35:18 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:36:17.577 18:35:18 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:17.577 18:35:18 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:17.577 18:35:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:17.577 18:35:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:17.577 18:35:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:18.151 18:35:19 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:36:18.151 18:35:19 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:18.151 18:35:19 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:18.151 18:35:19 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:18.720 18:35:19 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:18.720 18:35:19 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:18.720 18:35:19 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:18.720 18:35:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:18.720 18:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:18.720 18:35:20 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:18.720 18:35:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:18.720 18:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2283788 00:36:18.720 18:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:18.720 18:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:18.720 18:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2283788 00:36:18.720 18:35:20 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2283788 ']' 00:36:18.720 18:35:20 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:36:18.720 18:35:20 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:18.720 18:35:20 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:18.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:18.720 18:35:20 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:18.720 18:35:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:18.720 [2024-11-19 18:35:20.103872] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:36:18.720 [2024-11-19 18:35:20.103951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:18.981 [2024-11-19 18:35:20.205333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:18.981 [2024-11-19 18:35:20.259728] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:18.981 [2024-11-19 18:35:20.259787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:18.981 [2024-11-19 18:35:20.259797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:18.981 [2024-11-19 18:35:20.259804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:18.981 [2024-11-19 18:35:20.259810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:18.981 [2024-11-19 18:35:20.261902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:18.981 [2024-11-19 18:35:20.262063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:18.981 [2024-11-19 18:35:20.262225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:18.981 [2024-11-19 18:35:20.262226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:19.553 18:35:20 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:19.553 18:35:20 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:36:19.553 18:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:19.553 18:35:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.553 18:35:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:19.553 INFO: Log level set to 20 00:36:19.553 INFO: Requests: 00:36:19.553 { 00:36:19.553 "jsonrpc": "2.0", 00:36:19.553 "method": "nvmf_set_config", 00:36:19.553 "id": 1, 00:36:19.553 "params": { 00:36:19.553 "admin_cmd_passthru": { 00:36:19.553 "identify_ctrlr": true 00:36:19.553 } 00:36:19.553 } 00:36:19.553 } 00:36:19.553 00:36:19.553 INFO: response: 00:36:19.553 { 00:36:19.553 "jsonrpc": "2.0", 00:36:19.553 "id": 1, 00:36:19.553 "result": true 00:36:19.553 } 00:36:19.553 00:36:19.553 18:35:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.553 18:35:20 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:19.553 18:35:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.553 18:35:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:19.553 INFO: Setting log level to 20 00:36:19.553 INFO: Setting log level to 20 00:36:19.553 INFO: Log level set to 20 00:36:19.553 INFO: Log level set to 20 00:36:19.553 
INFO: Requests: 00:36:19.553 { 00:36:19.553 "jsonrpc": "2.0", 00:36:19.553 "method": "framework_start_init", 00:36:19.553 "id": 1 00:36:19.553 } 00:36:19.553 00:36:19.553 INFO: Requests: 00:36:19.553 { 00:36:19.553 "jsonrpc": "2.0", 00:36:19.553 "method": "framework_start_init", 00:36:19.553 "id": 1 00:36:19.553 } 00:36:19.553 00:36:19.814 [2024-11-19 18:35:21.020793] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:19.814 INFO: response: 00:36:19.814 { 00:36:19.814 "jsonrpc": "2.0", 00:36:19.814 "id": 1, 00:36:19.814 "result": true 00:36:19.814 } 00:36:19.814 00:36:19.814 INFO: response: 00:36:19.814 { 00:36:19.814 "jsonrpc": "2.0", 00:36:19.814 "id": 1, 00:36:19.814 "result": true 00:36:19.814 } 00:36:19.814 00:36:19.814 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.814 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:19.814 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.814 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:19.814 INFO: Setting log level to 40 00:36:19.814 INFO: Setting log level to 40 00:36:19.814 INFO: Setting log level to 40 00:36:19.814 [2024-11-19 18:35:21.034359] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:19.814 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.814 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:19.814 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:19.814 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:19.814 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:19.814 18:35:21 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.814 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:20.075 Nvme0n1 00:36:20.075 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.075 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:20.075 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.075 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:20.075 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.075 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:20.075 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.075 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:20.075 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.075 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:20.075 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.075 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:20.075 [2024-11-19 18:35:21.437308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:20.075 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.075 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:20.075 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.075 18:35:21 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:20.075 [ 00:36:20.075 { 00:36:20.075 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:20.075 "subtype": "Discovery", 00:36:20.075 "listen_addresses": [], 00:36:20.075 "allow_any_host": true, 00:36:20.075 "hosts": [] 00:36:20.075 }, 00:36:20.075 { 00:36:20.075 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:20.075 "subtype": "NVMe", 00:36:20.075 "listen_addresses": [ 00:36:20.075 { 00:36:20.075 "trtype": "TCP", 00:36:20.075 "adrfam": "IPv4", 00:36:20.075 "traddr": "10.0.0.2", 00:36:20.075 "trsvcid": "4420" 00:36:20.075 } 00:36:20.075 ], 00:36:20.075 "allow_any_host": true, 00:36:20.075 "hosts": [], 00:36:20.075 "serial_number": "SPDK00000000000001", 00:36:20.075 "model_number": "SPDK bdev Controller", 00:36:20.075 "max_namespaces": 1, 00:36:20.075 "min_cntlid": 1, 00:36:20.075 "max_cntlid": 65519, 00:36:20.075 "namespaces": [ 00:36:20.075 { 00:36:20.075 "nsid": 1, 00:36:20.075 "bdev_name": "Nvme0n1", 00:36:20.075 "name": "Nvme0n1", 00:36:20.075 "nguid": "36344730526054870025384500000044", 00:36:20.075 "uuid": "36344730-5260-5487-0025-384500000044" 00:36:20.075 } 00:36:20.075 ] 00:36:20.075 } 00:36:20.075 ] 00:36:20.075 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.075 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:20.075 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:20.075 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:20.336 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:36:20.336 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:20.336 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:20.336 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:20.596 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:20.596 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:36:20.596 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:20.596 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:20.596 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.596 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:20.596 18:35:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.596 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:20.596 18:35:21 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:20.596 18:35:21 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:20.596 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:20.596 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:20.596 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:20.596 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:20.596 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:20.596 rmmod nvme_tcp 00:36:20.596 rmmod nvme_fabrics 00:36:20.596 rmmod nvme_keyring 00:36:20.596 18:35:22 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:20.857 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:20.857 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:20.857 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2283788 ']' 00:36:20.857 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2283788 00:36:20.857 18:35:22 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2283788 ']' 00:36:20.857 18:35:22 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2283788 00:36:20.857 18:35:22 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:36:20.857 18:35:22 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:20.857 18:35:22 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2283788 00:36:20.857 18:35:22 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:20.857 18:35:22 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:20.857 18:35:22 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2283788' 00:36:20.857 killing process with pid 2283788 00:36:20.857 18:35:22 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2283788 00:36:20.857 18:35:22 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2283788 00:36:21.117 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:21.117 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:21.117 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:21.117 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:21.117 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:21.117 18:35:22 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:21.117 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:21.117 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:21.117 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:21.117 18:35:22 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:21.117 18:35:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:21.117 18:35:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:23.662 18:35:24 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:23.662 00:36:23.662 real 0m13.261s 00:36:23.662 user 0m10.904s 00:36:23.662 sys 0m6.684s 00:36:23.662 18:35:24 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:23.662 18:35:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:23.662 ************************************ 00:36:23.662 END TEST nvmf_identify_passthru 00:36:23.662 ************************************ 00:36:23.662 18:35:24 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:23.662 18:35:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:23.662 18:35:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:23.662 18:35:24 -- common/autotest_common.sh@10 -- # set +x 00:36:23.662 ************************************ 00:36:23.662 START TEST nvmf_dif 00:36:23.662 ************************************ 00:36:23.662 18:35:24 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:23.662 * Looking for test storage... 
00:36:23.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:23.662 18:35:24 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:23.662 18:35:24 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:36:23.662 18:35:24 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:23.662 18:35:24 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:23.662 18:35:24 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:23.662 18:35:24 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:23.662 18:35:24 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:23.663 18:35:24 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:23.663 18:35:24 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:23.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:23.663 --rc genhtml_branch_coverage=1 00:36:23.663 --rc genhtml_function_coverage=1 00:36:23.663 --rc genhtml_legend=1 00:36:23.663 --rc geninfo_all_blocks=1 00:36:23.663 --rc geninfo_unexecuted_blocks=1 00:36:23.663 00:36:23.663 ' 00:36:23.663 18:35:24 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:23.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:23.663 --rc genhtml_branch_coverage=1 00:36:23.663 --rc genhtml_function_coverage=1 00:36:23.663 --rc genhtml_legend=1 00:36:23.663 --rc geninfo_all_blocks=1 00:36:23.663 --rc geninfo_unexecuted_blocks=1 00:36:23.663 00:36:23.663 ' 00:36:23.663 18:35:24 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:36:23.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:23.663 --rc genhtml_branch_coverage=1 00:36:23.663 --rc genhtml_function_coverage=1 00:36:23.663 --rc genhtml_legend=1 00:36:23.663 --rc geninfo_all_blocks=1 00:36:23.663 --rc geninfo_unexecuted_blocks=1 00:36:23.663 00:36:23.663 ' 00:36:23.663 18:35:24 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:23.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:23.663 --rc genhtml_branch_coverage=1 00:36:23.663 --rc genhtml_function_coverage=1 00:36:23.663 --rc genhtml_legend=1 00:36:23.663 --rc geninfo_all_blocks=1 00:36:23.663 --rc geninfo_unexecuted_blocks=1 00:36:23.663 00:36:23.663 ' 00:36:23.663 18:35:24 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:23.663 18:35:24 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:23.663 18:35:24 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:23.663 18:35:24 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:23.663 18:35:24 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:23.663 18:35:24 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:23.663 18:35:24 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:23.663 18:35:24 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:23.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:23.663 18:35:24 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:23.663 18:35:24 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:36:23.663 18:35:24 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:23.663 18:35:24 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:23.663 18:35:24 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:23.663 18:35:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:23.663 18:35:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:23.663 18:35:24 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:36:23.663 18:35:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:31.814 18:35:31 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:31.814 18:35:31 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:31.815 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:31.815 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:31.815 18:35:31 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:31.815 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:31.815 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:31.815 
18:35:31 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:31.815 18:35:31 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:31.815 18:35:32 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:31.815 18:35:32 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:31.815 18:35:32 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:31.815 18:35:32 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:31.815 18:35:32 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:31.815 18:35:32 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:31.815 18:35:32 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:31.815 18:35:32 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:31.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:31.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:36:31.815 00:36:31.815 --- 10.0.0.2 ping statistics --- 00:36:31.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:31.815 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:36:31.815 18:35:32 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:31.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:31.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:36:31.815 00:36:31.815 --- 10.0.0.1 ping statistics --- 00:36:31.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:31.815 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:36:31.815 18:35:32 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:31.815 18:35:32 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:31.815 18:35:32 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:31.815 18:35:32 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:34.357 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:34.357 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:34.357 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:34.357 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:34.357 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:34.357 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:34.357 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:34.357 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:34.357 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:34.357 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:34.357 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:34.357 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:34.357 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:36:34.357 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:34.357 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:34.357 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:34.357 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:34.617 18:35:36 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:34.617 18:35:36 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:34.617 18:35:36 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:34.617 18:35:36 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:34.617 18:35:36 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:34.617 18:35:36 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:34.617 18:35:36 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:34.617 18:35:36 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:34.617 18:35:36 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:34.617 18:35:36 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:34.617 18:35:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:34.617 18:35:36 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2289765 00:36:34.617 18:35:36 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2289765 00:36:34.617 18:35:36 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:34.617 18:35:36 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2289765 ']' 00:36:34.617 18:35:36 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:34.617 18:35:36 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:34.617 18:35:36 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:34.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:34.617 18:35:36 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:34.617 18:35:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:34.877 [2024-11-19 18:35:36.126127] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:36:34.877 [2024-11-19 18:35:36.126186] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:34.877 [2024-11-19 18:35:36.220245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.877 [2024-11-19 18:35:36.255535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:34.877 [2024-11-19 18:35:36.255567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:34.877 [2024-11-19 18:35:36.255576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:34.877 [2024-11-19 18:35:36.255582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:34.877 [2024-11-19 18:35:36.255588] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:34.877 [2024-11-19 18:35:36.256144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:35.447 18:35:36 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:35.447 18:35:36 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:36:35.447 18:35:36 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:35.447 18:35:36 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:35.447 18:35:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:35.707 18:35:36 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:35.707 18:35:36 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:35.707 18:35:36 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:35.707 18:35:36 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.707 18:35:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:35.707 [2024-11-19 18:35:36.956789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:35.707 18:35:36 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.707 18:35:36 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:35.707 18:35:36 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:35.707 18:35:36 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:35.707 18:35:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:35.707 ************************************ 00:36:35.707 START TEST fio_dif_1_default 00:36:35.707 ************************************ 00:36:35.707 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:36:35.707 18:35:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:35.707 18:35:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:35.708 bdev_null0 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:35.708 [2024-11-19 18:35:37.045144] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:35.708 { 00:36:35.708 "params": { 00:36:35.708 "name": "Nvme$subsystem", 00:36:35.708 "trtype": "$TEST_TRANSPORT", 00:36:35.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:35.708 "adrfam": "ipv4", 00:36:35.708 "trsvcid": "$NVMF_PORT", 00:36:35.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:35.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:35.708 "hdgst": ${hdgst:-false}, 00:36:35.708 "ddgst": ${ddgst:-false} 00:36:35.708 }, 00:36:35.708 "method": "bdev_nvme_attach_controller" 00:36:35.708 } 00:36:35.708 EOF 00:36:35.708 )") 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:35.708 "params": { 00:36:35.708 "name": "Nvme0", 00:36:35.708 "trtype": "tcp", 00:36:35.708 "traddr": "10.0.0.2", 00:36:35.708 "adrfam": "ipv4", 00:36:35.708 "trsvcid": "4420", 00:36:35.708 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:35.708 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:35.708 "hdgst": false, 00:36:35.708 "ddgst": false 00:36:35.708 }, 00:36:35.708 "method": "bdev_nvme_attach_controller" 00:36:35.708 }' 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:35.708 18:35:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:36.278 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:36.278 fio-3.35 
00:36:36.278 Starting 1 thread 00:36:48.504 00:36:48.504 filename0: (groupid=0, jobs=1): err= 0: pid=2290291: Tue Nov 19 18:35:48 2024 00:36:48.504 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10004msec) 00:36:48.504 slat (nsec): min=5398, max=90476, avg=7616.34, stdev=4713.14 00:36:48.504 clat (usec): min=977, max=42933, avg=40810.70, stdev=3625.15 00:36:48.504 lat (usec): min=985, max=42941, avg=40818.32, stdev=3622.94 00:36:48.504 clat percentiles (usec): 00:36:48.504 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:48.504 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:48.504 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:36:48.504 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:36:48.504 | 99.99th=[42730] 00:36:48.504 bw ( KiB/s): min= 384, max= 416, per=99.53%, avg=390.74, stdev=13.40, samples=19 00:36:48.504 iops : min= 96, max= 104, avg=97.68, stdev= 3.35, samples=19 00:36:48.504 lat (usec) : 1000=0.31% 00:36:48.504 lat (msec) : 2=0.51%, 50=99.18% 00:36:48.504 cpu : usr=93.26%, sys=6.47%, ctx=15, majf=0, minf=426 00:36:48.504 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:48.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:48.504 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:48.504 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:48.504 00:36:48.504 Run status group 0 (all jobs): 00:36:48.504 READ: bw=392KiB/s (401kB/s), 392KiB/s-392KiB/s (401kB/s-401kB/s), io=3920KiB (4014kB), run=10004-10004msec 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.504 00:36:48.504 real 0m11.163s 00:36:48.504 user 0m17.304s 00:36:48.504 sys 0m1.619s 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:48.504 ************************************ 00:36:48.504 END TEST fio_dif_1_default 00:36:48.504 ************************************ 00:36:48.504 18:35:48 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:48.504 18:35:48 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:48.504 18:35:48 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:48.504 18:35:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:48.504 ************************************ 00:36:48.504 START TEST fio_dif_1_multi_subsystems 00:36:48.504 ************************************ 00:36:48.504 18:35:48 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:48.504 bdev_null0 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.504 18:35:48 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:48.504 [2024-11-19 18:35:48.287931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:48.504 bdev_null1 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:48.504 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:48.504 { 00:36:48.504 "params": { 00:36:48.504 "name": "Nvme$subsystem", 00:36:48.504 "trtype": "$TEST_TRANSPORT", 00:36:48.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:48.505 "adrfam": "ipv4", 00:36:48.505 "trsvcid": "$NVMF_PORT", 00:36:48.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:48.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:48.505 "hdgst": ${hdgst:-false}, 00:36:48.505 "ddgst": ${ddgst:-false} 00:36:48.505 }, 00:36:48.505 "method": "bdev_nvme_attach_controller" 00:36:48.505 } 00:36:48.505 EOF 00:36:48.505 )") 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:48.505 { 00:36:48.505 "params": { 00:36:48.505 "name": "Nvme$subsystem", 00:36:48.505 "trtype": "$TEST_TRANSPORT", 00:36:48.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:48.505 "adrfam": "ipv4", 00:36:48.505 "trsvcid": "$NVMF_PORT", 00:36:48.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:48.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:48.505 "hdgst": ${hdgst:-false}, 00:36:48.505 "ddgst": ${ddgst:-false} 00:36:48.505 }, 00:36:48.505 "method": "bdev_nvme_attach_controller" 00:36:48.505 } 00:36:48.505 EOF 00:36:48.505 )") 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:48.505 "params": { 00:36:48.505 "name": "Nvme0", 00:36:48.505 "trtype": "tcp", 00:36:48.505 "traddr": "10.0.0.2", 00:36:48.505 "adrfam": "ipv4", 00:36:48.505 "trsvcid": "4420", 00:36:48.505 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:48.505 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:48.505 "hdgst": false, 00:36:48.505 "ddgst": false 00:36:48.505 }, 00:36:48.505 "method": "bdev_nvme_attach_controller" 00:36:48.505 },{ 00:36:48.505 "params": { 00:36:48.505 "name": "Nvme1", 00:36:48.505 "trtype": "tcp", 00:36:48.505 "traddr": "10.0.0.2", 00:36:48.505 "adrfam": "ipv4", 00:36:48.505 "trsvcid": "4420", 00:36:48.505 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:48.505 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:48.505 "hdgst": false, 00:36:48.505 "ddgst": false 00:36:48.505 }, 00:36:48.505 "method": "bdev_nvme_attach_controller" 00:36:48.505 }' 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:48.505 18:35:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:48.505 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:48.505 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:48.505 fio-3.35 00:36:48.505 Starting 2 threads 00:36:58.496 00:36:58.496 filename0: (groupid=0, jobs=1): err= 0: pid=2292700: Tue Nov 19 18:35:59 2024 00:36:58.496 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10035msec) 00:36:58.496 slat (nsec): min=5400, max=32349, avg=6142.77, stdev=1422.52 00:36:58.496 clat (usec): min=405, max=41762, avg=21064.83, stdev=20315.32 00:36:58.496 lat (usec): min=413, max=41794, avg=21070.97, stdev=20315.33 00:36:58.496 clat percentiles (usec): 00:36:58.496 | 1.00th=[ 498], 5.00th=[ 635], 10.00th=[ 660], 20.00th=[ 676], 00:36:58.496 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[40633], 60.00th=[41157], 00:36:58.496 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:58.496 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:36:58.496 | 99.99th=[41681] 00:36:58.496 bw ( KiB/s): min= 672, max= 768, per=66.02%, avg=760.00, stdev=25.16, samples=20 00:36:58.496 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:36:58.496 lat (usec) : 500=1.37%, 750=47.06%, 1000=1.37% 00:36:58.496 lat (msec) : 50=50.21% 00:36:58.496 cpu : usr=95.85%, sys=3.91%, ctx=49, majf=0, minf=83 00:36:58.496 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:58.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:36:58.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.496 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.496 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:58.496 filename1: (groupid=0, jobs=1): err= 0: pid=2292701: Tue Nov 19 18:35:59 2024 00:36:58.496 read: IOPS=98, BW=393KiB/s (403kB/s)(3936KiB/10008msec) 00:36:58.496 slat (nsec): min=5407, max=31849, avg=6395.07, stdev=1562.00 00:36:58.496 clat (usec): min=500, max=42017, avg=40661.41, stdev=3639.95 00:36:58.496 lat (usec): min=506, max=42049, avg=40667.80, stdev=3640.03 00:36:58.496 clat percentiles (usec): 00:36:58.496 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:36:58.496 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:58.496 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:58.496 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:36:58.496 | 99.99th=[42206] 00:36:58.496 bw ( KiB/s): min= 384, max= 448, per=34.05%, avg=392.00, stdev=17.60, samples=20 00:36:58.496 iops : min= 96, max= 112, avg=98.00, stdev= 4.40, samples=20 00:36:58.496 lat (usec) : 750=0.81% 00:36:58.496 lat (msec) : 50=99.19% 00:36:58.496 cpu : usr=96.02%, sys=3.77%, ctx=13, majf=0, minf=186 00:36:58.496 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:58.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.496 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.496 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:58.496 00:36:58.496 Run status group 0 (all jobs): 00:36:58.496 READ: bw=1151KiB/s (1179kB/s), 393KiB/s-759KiB/s (403kB/s-777kB/s), io=11.3MiB (11.8MB), run=10008-10035msec 00:36:58.496 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:36:58.496 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:58.496 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:58.496 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:58.496 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:58.496 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:58.496 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.496 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:58.496 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.496 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:58.496 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.497 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:58.497 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.497 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:58.497 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:58.497 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:58.497 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:58.497 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.497 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:58.497 18:35:59 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.497 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:58.497 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.497 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:58.497 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.497 00:36:58.497 real 0m11.654s 00:36:58.497 user 0m37.981s 00:36:58.497 sys 0m1.149s 00:36:58.497 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:58.497 18:35:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:58.497 ************************************ 00:36:58.497 END TEST fio_dif_1_multi_subsystems 00:36:58.497 ************************************ 00:36:58.497 18:35:59 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:58.497 18:35:59 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:58.497 18:35:59 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:58.497 18:35:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:58.757 ************************************ 00:36:58.757 START TEST fio_dif_rand_params 00:36:58.757 ************************************ 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:58.757 18:35:59 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.757 bdev_null0 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.757 18:35:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:58.757 [2024-11-19 18:36:00.025848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:58.757 18:36:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:58.757 { 00:36:58.757 "params": { 00:36:58.757 "name": "Nvme$subsystem", 00:36:58.757 "trtype": "$TEST_TRANSPORT", 00:36:58.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:58.757 "adrfam": "ipv4", 00:36:58.757 "trsvcid": 
"$NVMF_PORT", 00:36:58.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:58.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:58.758 "hdgst": ${hdgst:-false}, 00:36:58.758 "ddgst": ${ddgst:-false} 00:36:58.758 }, 00:36:58.758 "method": "bdev_nvme_attach_controller" 00:36:58.758 } 00:36:58.758 EOF 00:36:58.758 )") 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:58.758 
18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:58.758 "params": { 00:36:58.758 "name": "Nvme0", 00:36:58.758 "trtype": "tcp", 00:36:58.758 "traddr": "10.0.0.2", 00:36:58.758 "adrfam": "ipv4", 00:36:58.758 "trsvcid": "4420", 00:36:58.758 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:58.758 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:58.758 "hdgst": false, 00:36:58.758 "ddgst": false 00:36:58.758 }, 00:36:58.758 "method": "bdev_nvme_attach_controller" 00:36:58.758 }' 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:58.758 18:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:59.016 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:59.016 ... 00:36:59.016 fio-3.35 00:36:59.016 Starting 3 threads 00:37:05.583 00:37:05.583 filename0: (groupid=0, jobs=1): err= 0: pid=2294947: Tue Nov 19 18:36:06 2024 00:37:05.583 read: IOPS=326, BW=40.8MiB/s (42.8MB/s)(206MiB/5046msec) 00:37:05.583 slat (nsec): min=5502, max=74411, avg=8437.45, stdev=2502.01 00:37:05.583 clat (usec): min=4689, max=48744, avg=9159.48, stdev=3822.91 00:37:05.583 lat (usec): min=4698, max=48751, avg=9167.91, stdev=3822.85 00:37:05.583 clat percentiles (usec): 00:37:05.583 | 1.00th=[ 5276], 5.00th=[ 5997], 10.00th=[ 6521], 20.00th=[ 7373], 00:37:05.583 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9503], 00:37:05.583 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10814], 95.00th=[11207], 00:37:05.583 | 99.00th=[12125], 99.50th=[46400], 99.90th=[47449], 99.95th=[48497], 00:37:05.583 | 99.99th=[48497] 00:37:05.583 bw ( KiB/s): min=37888, max=48128, per=34.21%, avg=42086.40, stdev=2749.80, samples=10 00:37:05.583 iops : min= 296, max= 376, avg=328.80, stdev=21.48, samples=10 00:37:05.583 lat (msec) : 10=71.20%, 20=27.95%, 50=0.85% 00:37:05.583 cpu : usr=94.51%, sys=5.21%, ctx=11, majf=0, minf=199 00:37:05.583 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:05.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.583 issued rwts: total=1646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:05.583 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:05.583 filename0: (groupid=0, jobs=1): err= 0: pid=2294948: Tue Nov 19 18:36:06 2024 00:37:05.583 read: IOPS=312, BW=39.0MiB/s (40.9MB/s)(195MiB/5004msec) 00:37:05.583 slat (nsec): min=5405, max=46912, avg=7963.78, stdev=2068.52 
00:37:05.583 clat (usec): min=4959, max=51338, avg=9599.83, stdev=5460.42 00:37:05.583 lat (usec): min=4967, max=51348, avg=9607.80, stdev=5460.52 00:37:05.583 clat percentiles (usec): 00:37:05.583 | 1.00th=[ 5407], 5.00th=[ 6063], 10.00th=[ 6521], 20.00th=[ 7439], 00:37:05.583 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9634], 00:37:05.583 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10945], 95.00th=[11469], 00:37:05.583 | 99.00th=[49021], 99.50th=[49021], 99.90th=[51119], 99.95th=[51119], 00:37:05.583 | 99.99th=[51119] 00:37:05.583 bw ( KiB/s): min=32768, max=43520, per=32.46%, avg=39936.00, stdev=3359.58, samples=10 00:37:05.583 iops : min= 256, max= 340, avg=312.00, stdev=26.25, samples=10 00:37:05.583 lat (msec) : 10=68.57%, 20=29.71%, 50=1.47%, 100=0.26% 00:37:05.583 cpu : usr=94.58%, sys=5.14%, ctx=10, majf=0, minf=98 00:37:05.583 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:05.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.583 issued rwts: total=1562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:05.583 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:05.583 filename0: (groupid=0, jobs=1): err= 0: pid=2294949: Tue Nov 19 18:36:06 2024 00:37:05.583 read: IOPS=325, BW=40.7MiB/s (42.7MB/s)(205MiB/5043msec) 00:37:05.583 slat (nsec): min=5418, max=34818, avg=7801.36, stdev=1812.73 00:37:05.583 clat (usec): min=3885, max=89368, avg=9153.57, stdev=8853.01 00:37:05.583 lat (usec): min=3891, max=89377, avg=9161.37, stdev=8853.15 00:37:05.583 clat percentiles (usec): 00:37:05.583 | 1.00th=[ 4883], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[ 6456], 00:37:05.583 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7242], 60.00th=[ 7570], 00:37:05.583 | 70.00th=[ 7963], 80.00th=[ 8455], 90.00th=[ 9110], 95.00th=[10159], 00:37:05.583 | 99.00th=[47973], 99.50th=[49021], 99.90th=[87557], 
99.95th=[89654], 00:37:05.583 | 99.99th=[89654] 00:37:05.583 bw ( KiB/s): min=27392, max=50944, per=34.15%, avg=42009.60, stdev=6529.48, samples=10 00:37:05.583 iops : min= 214, max= 398, avg=328.20, stdev=51.01, samples=10 00:37:05.583 lat (msec) : 4=0.06%, 10=94.64%, 20=0.85%, 50=4.14%, 100=0.30% 00:37:05.583 cpu : usr=95.89%, sys=3.83%, ctx=9, majf=0, minf=111 00:37:05.583 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:05.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:05.583 issued rwts: total=1642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:05.583 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:05.583 00:37:05.583 Run status group 0 (all jobs): 00:37:05.583 READ: bw=120MiB/s (126MB/s), 39.0MiB/s-40.8MiB/s (40.9MB/s-42.8MB/s), io=606MiB (636MB), run=5004-5046msec 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:05.583 18:36:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.583 bdev_null0 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.583 [2024-11-19 18:36:06.384720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.583 bdev_null1 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:05.583 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:37:05.584 bdev_null2 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:05.584 { 00:37:05.584 "params": { 00:37:05.584 "name": "Nvme$subsystem", 00:37:05.584 "trtype": "$TEST_TRANSPORT", 00:37:05.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:05.584 "adrfam": "ipv4", 00:37:05.584 "trsvcid": "$NVMF_PORT", 00:37:05.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:05.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:05.584 "hdgst": ${hdgst:-false}, 00:37:05.584 "ddgst": ${ddgst:-false} 00:37:05.584 }, 00:37:05.584 "method": "bdev_nvme_attach_controller" 00:37:05.584 } 00:37:05.584 EOF 00:37:05.584 )") 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.584 
18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:05.584 { 00:37:05.584 "params": { 00:37:05.584 "name": "Nvme$subsystem", 00:37:05.584 "trtype": "$TEST_TRANSPORT", 00:37:05.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:05.584 "adrfam": "ipv4", 00:37:05.584 "trsvcid": "$NVMF_PORT", 00:37:05.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:05.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:05.584 "hdgst": ${hdgst:-false}, 00:37:05.584 "ddgst": ${ddgst:-false} 00:37:05.584 }, 00:37:05.584 "method": "bdev_nvme_attach_controller" 00:37:05.584 } 00:37:05.584 EOF 00:37:05.584 )") 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:05.584 
18:36:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:05.584 { 00:37:05.584 "params": { 00:37:05.584 "name": "Nvme$subsystem", 00:37:05.584 "trtype": "$TEST_TRANSPORT", 00:37:05.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:05.584 "adrfam": "ipv4", 00:37:05.584 "trsvcid": "$NVMF_PORT", 00:37:05.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:05.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:05.584 "hdgst": ${hdgst:-false}, 00:37:05.584 "ddgst": ${ddgst:-false} 00:37:05.584 }, 00:37:05.584 "method": "bdev_nvme_attach_controller" 00:37:05.584 } 00:37:05.584 EOF 00:37:05.584 )") 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:05.584 "params": { 00:37:05.584 "name": "Nvme0", 00:37:05.584 "trtype": "tcp", 00:37:05.584 "traddr": "10.0.0.2", 00:37:05.584 "adrfam": "ipv4", 00:37:05.584 "trsvcid": "4420", 00:37:05.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:05.584 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:05.584 "hdgst": false, 00:37:05.584 "ddgst": false 00:37:05.584 }, 00:37:05.584 "method": "bdev_nvme_attach_controller" 00:37:05.584 },{ 00:37:05.584 "params": { 00:37:05.584 "name": "Nvme1", 00:37:05.584 "trtype": "tcp", 00:37:05.584 "traddr": "10.0.0.2", 00:37:05.584 "adrfam": "ipv4", 00:37:05.584 "trsvcid": "4420", 00:37:05.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:05.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:05.584 "hdgst": false, 00:37:05.584 "ddgst": false 00:37:05.584 }, 00:37:05.584 "method": "bdev_nvme_attach_controller" 00:37:05.584 },{ 00:37:05.584 "params": { 00:37:05.584 "name": "Nvme2", 00:37:05.584 "trtype": "tcp", 00:37:05.584 "traddr": "10.0.0.2", 00:37:05.584 "adrfam": "ipv4", 00:37:05.584 "trsvcid": "4420", 00:37:05.584 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:05.584 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:05.584 "hdgst": false, 00:37:05.584 "ddgst": false 00:37:05.584 }, 00:37:05.584 "method": "bdev_nvme_attach_controller" 00:37:05.584 }' 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:05.584 18:36:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:05.584 18:36:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.584 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:05.584 ... 00:37:05.585 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:05.585 ... 00:37:05.585 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:05.585 ... 
00:37:05.585 fio-3.35 00:37:05.585 Starting 24 threads 00:37:17.811 00:37:17.811 filename0: (groupid=0, jobs=1): err= 0: pid=2296513: Tue Nov 19 18:36:17 2024 00:37:17.811 read: IOPS=695, BW=2784KiB/s (2851kB/s)(27.2MiB/10015msec) 00:37:17.811 slat (nsec): min=5559, max=75227, avg=10150.78, stdev=6602.77 00:37:17.811 clat (usec): min=762, max=27795, avg=22904.62, stdev=4274.05 00:37:17.811 lat (usec): min=773, max=27805, avg=22914.77, stdev=4272.65 00:37:17.811 clat percentiles (usec): 00:37:17.811 | 1.00th=[ 1385], 5.00th=[19530], 10.00th=[22938], 20.00th=[23462], 00:37:17.811 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:17.811 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:17.811 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26608], 99.95th=[26608], 00:37:17.811 | 99.99th=[27919] 00:37:17.811 bw ( KiB/s): min= 2560, max= 4688, per=4.30%, avg=2792.95, stdev=460.80, samples=19 00:37:17.811 iops : min= 640, max= 1172, avg=698.21, stdev=115.20, samples=19 00:37:17.811 lat (usec) : 1000=0.10% 00:37:17.811 lat (msec) : 2=2.77%, 4=0.49%, 10=0.46%, 20=1.21%, 50=94.98% 00:37:17.811 cpu : usr=98.59%, sys=0.90%, ctx=127, majf=0, minf=65 00:37:17.811 IO depths : 1=6.0%, 2=12.0%, 4=24.3%, 8=51.2%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:17.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.811 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.811 issued rwts: total=6970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.811 filename0: (groupid=0, jobs=1): err= 0: pid=2296514: Tue Nov 19 18:36:17 2024 00:37:17.811 read: IOPS=674, BW=2697KiB/s (2761kB/s)(26.4MiB/10015msec) 00:37:17.811 slat (nsec): min=5452, max=93789, avg=19060.52, stdev=16644.32 00:37:17.811 clat (usec): min=8722, max=26476, avg=23580.14, stdev=1597.30 00:37:17.811 lat (usec): min=8735, max=26485, avg=23599.20, stdev=1597.51 
00:37:17.811 clat percentiles (usec): 00:37:17.811 | 1.00th=[13173], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:17.811 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:17.811 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:17.811 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26346], 99.95th=[26346], 00:37:17.811 | 99.99th=[26346] 00:37:17.811 bw ( KiB/s): min= 2560, max= 2944, per=4.16%, avg=2701.16, stdev=72.67, samples=19 00:37:17.811 iops : min= 640, max= 736, avg=675.26, stdev=18.17, samples=19 00:37:17.811 lat (msec) : 10=0.47%, 20=1.18%, 50=98.34% 00:37:17.811 cpu : usr=98.85%, sys=0.85%, ctx=21, majf=0, minf=73 00:37:17.811 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:17.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.811 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.811 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.811 filename0: (groupid=0, jobs=1): err= 0: pid=2296515: Tue Nov 19 18:36:17 2024 00:37:17.811 read: IOPS=669, BW=2679KiB/s (2744kB/s)(26.2MiB/10008msec) 00:37:17.811 slat (nsec): min=5560, max=95558, avg=16180.61, stdev=13757.09 00:37:17.811 clat (usec): min=15377, max=27345, avg=23740.58, stdev=834.81 00:37:17.811 lat (usec): min=15394, max=27362, avg=23756.77, stdev=834.96 00:37:17.811 clat percentiles (usec): 00:37:17.811 | 1.00th=[21890], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:17.811 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:17.811 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:17.811 | 99.00th=[25560], 99.50th=[25822], 99.90th=[27395], 99.95th=[27395], 00:37:17.811 | 99.99th=[27395] 00:37:17.811 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2681.26, stdev=29.37, samples=19 00:37:17.811 iops : 
min= 640, max= 672, avg=670.32, stdev= 7.34, samples=19 00:37:17.811 lat (msec) : 20=0.48%, 50=99.52% 00:37:17.811 cpu : usr=99.03%, sys=0.69%, ctx=15, majf=0, minf=53 00:37:17.811 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:17.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.811 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.811 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.811 filename0: (groupid=0, jobs=1): err= 0: pid=2296516: Tue Nov 19 18:36:17 2024 00:37:17.811 read: IOPS=676, BW=2706KiB/s (2771kB/s)(26.4MiB/10007msec) 00:37:17.811 slat (nsec): min=5570, max=99169, avg=17370.37, stdev=16363.53 00:37:17.811 clat (usec): min=5679, max=39491, avg=23507.28, stdev=2118.35 00:37:17.811 lat (usec): min=5690, max=39497, avg=23524.65, stdev=2118.45 00:37:17.811 clat percentiles (usec): 00:37:17.811 | 1.00th=[12649], 5.00th=[22414], 10.00th=[22938], 20.00th=[23200], 00:37:17.811 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:17.811 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:17.811 | 99.00th=[26608], 99.50th=[30016], 99.90th=[38536], 99.95th=[38536], 00:37:17.811 | 99.99th=[39584] 00:37:17.811 bw ( KiB/s): min= 2560, max= 3344, per=4.17%, avg=2708.74, stdev=158.98, samples=19 00:37:17.812 iops : min= 640, max= 836, avg=677.16, stdev=39.75, samples=19 00:37:17.812 lat (msec) : 10=0.55%, 20=3.25%, 50=96.20% 00:37:17.812 cpu : usr=99.26%, sys=0.44%, ctx=14, majf=0, minf=44 00:37:17.812 IO depths : 1=5.8%, 2=11.6%, 4=24.1%, 8=51.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:37:17.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.812 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.812 issued rwts: total=6770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.812 
latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.812 filename0: (groupid=0, jobs=1): err= 0: pid=2296517: Tue Nov 19 18:36:17 2024 00:37:17.812 read: IOPS=671, BW=2686KiB/s (2751kB/s)(26.2MiB/10003msec) 00:37:17.812 slat (nsec): min=5561, max=95360, avg=23165.65, stdev=16355.54 00:37:17.812 clat (usec): min=9348, max=40133, avg=23607.46, stdev=2086.02 00:37:17.812 lat (usec): min=9354, max=40158, avg=23630.63, stdev=2086.58 00:37:17.812 clat percentiles (usec): 00:37:17.812 | 1.00th=[14877], 5.00th=[22152], 10.00th=[22938], 20.00th=[23200], 00:37:17.812 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:17.812 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:17.812 | 99.00th=[31065], 99.50th=[36439], 99.90th=[40109], 99.95th=[40109], 00:37:17.812 | 99.99th=[40109] 00:37:17.812 bw ( KiB/s): min= 2560, max= 2832, per=4.12%, avg=2677.89, stdev=71.20, samples=19 00:37:17.812 iops : min= 640, max= 708, avg=669.47, stdev=17.80, samples=19 00:37:17.812 lat (msec) : 10=0.21%, 20=2.83%, 50=96.96% 00:37:17.812 cpu : usr=98.66%, sys=0.89%, ctx=63, majf=0, minf=52 00:37:17.812 IO depths : 1=5.6%, 2=11.4%, 4=23.4%, 8=52.6%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:17.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.812 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.812 issued rwts: total=6718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.812 filename0: (groupid=0, jobs=1): err= 0: pid=2296518: Tue Nov 19 18:36:17 2024 00:37:17.812 read: IOPS=672, BW=2690KiB/s (2755kB/s)(26.3MiB/10015msec) 00:37:17.812 slat (usec): min=5, max=125, avg=14.01, stdev=11.74 00:37:17.812 clat (usec): min=9081, max=31889, avg=23661.37, stdev=1396.03 00:37:17.812 lat (usec): min=9103, max=31898, avg=23675.39, stdev=1393.44 00:37:17.812 clat percentiles (usec): 00:37:17.812 | 1.00th=[15401], 
5.00th=[22676], 10.00th=[23200], 20.00th=[23462], 00:37:17.812 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:37:17.812 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:17.812 | 99.00th=[25560], 99.50th=[26084], 99.90th=[28443], 99.95th=[29492], 00:37:17.812 | 99.99th=[31851] 00:37:17.812 bw ( KiB/s): min= 2560, max= 2944, per=4.15%, avg=2694.42, stdev=67.15, samples=19 00:37:17.812 iops : min= 640, max= 736, avg=673.58, stdev=16.79, samples=19 00:37:17.812 lat (msec) : 10=0.27%, 20=1.04%, 50=98.69% 00:37:17.812 cpu : usr=98.87%, sys=0.77%, ctx=67, majf=0, minf=48 00:37:17.812 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:17.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.812 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.812 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.812 filename0: (groupid=0, jobs=1): err= 0: pid=2296519: Tue Nov 19 18:36:17 2024 00:37:17.812 read: IOPS=683, BW=2735KiB/s (2800kB/s)(26.7MiB/10009msec) 00:37:17.812 slat (nsec): min=5396, max=97759, avg=17423.47, stdev=16038.20 00:37:17.812 clat (usec): min=9385, max=45280, avg=23301.55, stdev=4506.77 00:37:17.812 lat (usec): min=9391, max=45297, avg=23318.97, stdev=4507.94 00:37:17.812 clat percentiles (usec): 00:37:17.812 | 1.00th=[11863], 5.00th=[14615], 10.00th=[16909], 20.00th=[21103], 00:37:17.812 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:17.812 | 70.00th=[23987], 80.00th=[24511], 90.00th=[28181], 95.00th=[32375], 00:37:17.812 | 99.00th=[35914], 99.50th=[36439], 99.90th=[38536], 99.95th=[45351], 00:37:17.812 | 99.99th=[45351] 00:37:17.812 bw ( KiB/s): min= 2352, max= 2976, per=4.21%, avg=2734.11, stdev=152.43, samples=19 00:37:17.812 iops : min= 588, max= 744, avg=683.47, stdev=38.12, samples=19 
00:37:17.812 lat (msec) : 10=0.15%, 20=16.26%, 50=83.59% 00:37:17.812 cpu : usr=98.94%, sys=0.75%, ctx=20, majf=0, minf=37 00:37:17.812 IO depths : 1=0.6%, 2=1.2%, 4=7.4%, 8=76.5%, 16=14.3%, 32=0.0%, >=64=0.0% 00:37:17.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.812 complete : 0=0.0%, 4=88.6%, 8=8.1%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.812 issued rwts: total=6843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.812 filename0: (groupid=0, jobs=1): err= 0: pid=2296520: Tue Nov 19 18:36:17 2024 00:37:17.812 read: IOPS=668, BW=2672KiB/s (2736kB/s)(26.1MiB/10002msec) 00:37:17.812 slat (nsec): min=5575, max=93834, avg=24491.16, stdev=17016.73 00:37:17.812 clat (usec): min=14380, max=34450, avg=23734.24, stdev=1750.05 00:37:17.812 lat (usec): min=14392, max=34497, avg=23758.73, stdev=1750.79 00:37:17.812 clat percentiles (usec): 00:37:17.812 | 1.00th=[17433], 5.00th=[22414], 10.00th=[22938], 20.00th=[23200], 00:37:17.812 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:17.812 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:37:17.812 | 99.00th=[30802], 99.50th=[32375], 99.90th=[34341], 99.95th=[34341], 00:37:17.812 | 99.99th=[34341] 00:37:17.812 bw ( KiB/s): min= 2560, max= 2928, per=4.12%, avg=2671.68, stdev=89.35, samples=19 00:37:17.812 iops : min= 640, max= 732, avg=667.89, stdev=22.34, samples=19 00:37:17.812 lat (msec) : 20=2.80%, 50=97.20% 00:37:17.812 cpu : usr=99.10%, sys=0.59%, ctx=68, majf=0, minf=68 00:37:17.812 IO depths : 1=5.6%, 2=11.5%, 4=23.8%, 8=52.1%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:17.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.812 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.812 issued rwts: total=6682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.812 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:37:17.812 filename1: (groupid=0, jobs=1): err= 0: pid=2296521: Tue Nov 19 18:36:17 2024 00:37:17.812 read: IOPS=674, BW=2697KiB/s (2761kB/s)(26.4MiB/10015msec) 00:37:17.812 slat (usec): min=5, max=116, avg=15.58, stdev=12.94 00:37:17.812 clat (usec): min=8688, max=27574, avg=23606.22, stdev=1594.39 00:37:17.812 lat (usec): min=8707, max=27593, avg=23621.80, stdev=1592.01 00:37:17.812 clat percentiles (usec): 00:37:17.812 | 1.00th=[14091], 5.00th=[22676], 10.00th=[22938], 20.00th=[23462], 00:37:17.812 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:17.812 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:17.812 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26346], 99.95th=[26346], 00:37:17.812 | 99.99th=[27657] 00:37:17.812 bw ( KiB/s): min= 2560, max= 2944, per=4.16%, avg=2701.16, stdev=72.08, samples=19 00:37:17.812 iops : min= 640, max= 736, avg=675.26, stdev=17.98, samples=19 00:37:17.812 lat (msec) : 10=0.47%, 20=1.21%, 50=98.31% 00:37:17.812 cpu : usr=99.03%, sys=0.64%, ctx=69, majf=0, minf=67 00:37:17.812 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:17.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.812 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.812 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.812 filename1: (groupid=0, jobs=1): err= 0: pid=2296522: Tue Nov 19 18:36:17 2024 00:37:17.812 read: IOPS=669, BW=2679KiB/s (2743kB/s)(26.2MiB/10009msec) 00:37:17.812 slat (nsec): min=5550, max=86596, avg=17408.29, stdev=12100.96 00:37:17.812 clat (usec): min=9176, max=36100, avg=23726.37, stdev=1567.91 00:37:17.812 lat (usec): min=9182, max=36106, avg=23743.78, stdev=1568.31 00:37:17.812 clat percentiles (usec): 00:37:17.812 | 1.00th=[17433], 5.00th=[22676], 10.00th=[22938], 20.00th=[23200], 
00:37:17.812 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:37:17.812 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:17.812 | 99.00th=[30016], 99.50th=[31327], 99.90th=[34866], 99.95th=[35914], 00:37:17.812 | 99.99th=[35914] 00:37:17.812 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2673.89, stdev=40.18, samples=19 00:37:17.812 iops : min= 640, max= 672, avg=668.42, stdev=10.04, samples=19 00:37:17.812 lat (msec) : 10=0.10%, 20=1.64%, 50=98.25% 00:37:17.812 cpu : usr=98.71%, sys=0.82%, ctx=90, majf=0, minf=40 00:37:17.812 IO depths : 1=5.8%, 2=11.9%, 4=24.6%, 8=51.0%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:17.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.812 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.812 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.812 filename1: (groupid=0, jobs=1): err= 0: pid=2296523: Tue Nov 19 18:36:17 2024 00:37:17.812 read: IOPS=672, BW=2691KiB/s (2755kB/s)(26.3MiB/10022msec) 00:37:17.812 slat (nsec): min=5559, max=99831, avg=16310.96, stdev=11949.83 00:37:17.812 clat (usec): min=8933, max=38845, avg=23626.75, stdev=1599.10 00:37:17.812 lat (usec): min=8950, max=38852, avg=23643.06, stdev=1597.69 00:37:17.812 clat percentiles (usec): 00:37:17.812 | 1.00th=[14222], 5.00th=[22676], 10.00th=[22938], 20.00th=[23200], 00:37:17.812 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:37:17.812 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:17.812 | 99.00th=[25297], 99.50th=[26608], 99.90th=[36963], 99.95th=[39060], 00:37:17.812 | 99.99th=[39060] 00:37:17.812 bw ( KiB/s): min= 2560, max= 2944, per=4.14%, avg=2690.10, stdev=68.16, samples=20 00:37:17.812 iops : min= 640, max= 736, avg=672.50, stdev=17.04, samples=20 00:37:17.812 lat (msec) : 10=0.47%, 20=1.07%, 50=98.46% 
00:37:17.812 cpu : usr=99.04%, sys=0.63%, ctx=35, majf=0, minf=52 00:37:17.812 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:17.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.812 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.812 issued rwts: total=6742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.812 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.812 filename1: (groupid=0, jobs=1): err= 0: pid=2296524: Tue Nov 19 18:36:17 2024 00:37:17.812 read: IOPS=704, BW=2817KiB/s (2884kB/s)(27.6MiB/10022msec) 00:37:17.812 slat (usec): min=5, max=126, avg=18.74, stdev=17.04 00:37:17.813 clat (usec): min=8622, max=41261, avg=22574.38, stdev=4264.85 00:37:17.813 lat (usec): min=8633, max=41270, avg=22593.12, stdev=4267.13 00:37:17.813 clat percentiles (usec): 00:37:17.813 | 1.00th=[12518], 5.00th=[15139], 10.00th=[16909], 20.00th=[19006], 00:37:17.813 | 30.00th=[22414], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:37:17.813 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25297], 95.00th=[29230], 00:37:17.813 | 99.00th=[37487], 99.50th=[39060], 99.90th=[41157], 99.95th=[41157], 00:37:17.813 | 99.99th=[41157] 00:37:17.813 bw ( KiB/s): min= 2640, max= 3128, per=4.34%, avg=2818.10, stdev=134.42, samples=20 00:37:17.813 iops : min= 660, max= 782, avg=704.50, stdev=33.58, samples=20 00:37:17.813 lat (msec) : 10=0.45%, 20=22.72%, 50=76.83% 00:37:17.813 cpu : usr=98.93%, sys=0.77%, ctx=17, majf=0, minf=43 00:37:17.813 IO depths : 1=2.1%, 2=4.3%, 4=11.4%, 8=70.3%, 16=11.8%, 32=0.0%, >=64=0.0% 00:37:17.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.813 complete : 0=0.0%, 4=89.9%, 8=5.8%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.813 issued rwts: total=7057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.813 filename1: (groupid=0, jobs=1): err= 0: 
pid=2296525: Tue Nov 19 18:36:17 2024 00:37:17.813 read: IOPS=672, BW=2691KiB/s (2756kB/s)(26.3MiB/10004msec) 00:37:17.813 slat (nsec): min=5555, max=93619, avg=21046.21, stdev=16461.21 00:37:17.813 clat (usec): min=4982, max=41038, avg=23606.77, stdev=2934.39 00:37:17.813 lat (usec): min=4988, max=41057, avg=23627.81, stdev=2934.98 00:37:17.813 clat percentiles (usec): 00:37:17.813 | 1.00th=[13829], 5.00th=[18744], 10.00th=[22676], 20.00th=[23200], 00:37:17.813 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:17.813 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[26346], 00:37:17.813 | 99.00th=[35914], 99.50th=[39060], 99.90th=[41157], 99.95th=[41157], 00:37:17.813 | 99.99th=[41157] 00:37:17.813 bw ( KiB/s): min= 2560, max= 2832, per=4.12%, avg=2675.63, stdev=70.98, samples=19 00:37:17.813 iops : min= 640, max= 708, avg=668.89, stdev=17.77, samples=19 00:37:17.813 lat (msec) : 10=0.21%, 20=5.74%, 50=94.06% 00:37:17.813 cpu : usr=98.65%, sys=0.87%, ctx=70, majf=0, minf=89 00:37:17.813 IO depths : 1=3.5%, 2=7.7%, 4=17.6%, 8=60.9%, 16=10.3%, 32=0.0%, >=64=0.0% 00:37:17.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.813 complete : 0=0.0%, 4=92.5%, 8=3.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.813 issued rwts: total=6730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.813 filename1: (groupid=0, jobs=1): err= 0: pid=2296526: Tue Nov 19 18:36:17 2024 00:37:17.813 read: IOPS=668, BW=2672KiB/s (2736kB/s)(26.1MiB/10011msec) 00:37:17.813 slat (nsec): min=5592, max=97092, avg=18624.73, stdev=13932.50 00:37:17.813 clat (usec): min=10875, max=35207, avg=23783.96, stdev=1573.95 00:37:17.813 lat (usec): min=10886, max=35226, avg=23802.58, stdev=1574.22 00:37:17.813 clat percentiles (usec): 00:37:17.813 | 1.00th=[18220], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:17.813 | 30.00th=[23462], 40.00th=[23462], 
50.00th=[23725], 60.00th=[23725], 00:37:17.813 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:17.813 | 99.00th=[29492], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:37:17.813 | 99.99th=[35390] 00:37:17.813 bw ( KiB/s): min= 2560, max= 2693, per=4.12%, avg=2674.79, stdev=40.47, samples=19 00:37:17.813 iops : min= 640, max= 673, avg=668.68, stdev=10.11, samples=19 00:37:17.813 lat (msec) : 20=1.29%, 50=98.71% 00:37:17.813 cpu : usr=98.93%, sys=0.75%, ctx=38, majf=0, minf=51 00:37:17.813 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.4%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:17.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.813 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.813 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.813 filename1: (groupid=0, jobs=1): err= 0: pid=2296527: Tue Nov 19 18:36:17 2024 00:37:17.813 read: IOPS=676, BW=2707KiB/s (2772kB/s)(26.4MiB/10005msec) 00:37:17.813 slat (nsec): min=5554, max=93956, avg=21032.09, stdev=12923.82 00:37:17.813 clat (usec): min=5456, max=41523, avg=23459.71, stdev=2495.12 00:37:17.813 lat (usec): min=5462, max=41540, avg=23480.74, stdev=2496.69 00:37:17.813 clat percentiles (usec): 00:37:17.813 | 1.00th=[15008], 5.00th=[19006], 10.00th=[22676], 20.00th=[23200], 00:37:17.813 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:17.813 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:17.813 | 99.00th=[31851], 99.50th=[35914], 99.90th=[41681], 99.95th=[41681], 00:37:17.813 | 99.99th=[41681] 00:37:17.813 bw ( KiB/s): min= 2560, max= 2832, per=4.14%, avg=2685.74, stdev=61.01, samples=19 00:37:17.813 iops : min= 640, max= 708, avg=671.42, stdev=15.28, samples=19 00:37:17.813 lat (msec) : 10=0.21%, 20=5.35%, 50=94.45% 00:37:17.813 cpu : usr=98.84%, sys=0.86%, ctx=22, majf=0, minf=62 
00:37:17.813 IO depths : 1=5.3%, 2=10.8%, 4=22.5%, 8=54.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:37:17.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.813 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.813 issued rwts: total=6770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.813 filename1: (groupid=0, jobs=1): err= 0: pid=2296528: Tue Nov 19 18:36:17 2024 00:37:17.813 read: IOPS=663, BW=2654KiB/s (2718kB/s)(25.9MiB/10004msec) 00:37:17.813 slat (nsec): min=5552, max=91736, avg=15166.71, stdev=12298.32 00:37:17.813 clat (usec): min=8425, max=54912, avg=24028.41, stdev=4274.45 00:37:17.813 lat (usec): min=8431, max=54928, avg=24043.57, stdev=4275.25 00:37:17.813 clat percentiles (usec): 00:37:17.813 | 1.00th=[14484], 5.00th=[17171], 10.00th=[19006], 20.00th=[21627], 00:37:17.813 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:37:17.813 | 70.00th=[24249], 80.00th=[25297], 90.00th=[29492], 95.00th=[32113], 00:37:17.813 | 99.00th=[38011], 99.50th=[39060], 99.90th=[41157], 99.95th=[54789], 00:37:17.813 | 99.99th=[54789] 00:37:17.813 bw ( KiB/s): min= 2436, max= 2848, per=4.09%, avg=2655.37, stdev=117.34, samples=19 00:37:17.813 iops : min= 609, max= 712, avg=663.84, stdev=29.33, samples=19 00:37:17.813 lat (msec) : 10=0.24%, 20=12.44%, 50=87.24%, 100=0.08% 00:37:17.813 cpu : usr=99.03%, sys=0.69%, ctx=15, majf=0, minf=49 00:37:17.813 IO depths : 1=1.1%, 2=2.2%, 4=7.9%, 8=75.2%, 16=13.7%, 32=0.0%, >=64=0.0% 00:37:17.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.813 complete : 0=0.0%, 4=89.7%, 8=6.9%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.813 issued rwts: total=6638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.813 filename2: (groupid=0, jobs=1): err= 0: pid=2296529: Tue Nov 19 18:36:17 2024 
00:37:17.813 read: IOPS=671, BW=2688KiB/s (2752kB/s)(26.3MiB/10004msec) 00:37:17.813 slat (nsec): min=5345, max=84144, avg=20836.44, stdev=13470.83 00:37:17.813 clat (usec): min=5024, max=49290, avg=23649.21, stdev=2248.82 00:37:17.813 lat (usec): min=5031, max=49305, avg=23670.05, stdev=2249.91 00:37:17.813 clat percentiles (usec): 00:37:17.813 | 1.00th=[15926], 5.00th=[22152], 10.00th=[22938], 20.00th=[23200], 00:37:17.813 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:17.813 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:37:17.813 | 99.00th=[30802], 99.50th=[35390], 99.90th=[41681], 99.95th=[41681], 00:37:17.813 | 99.99th=[49546] 00:37:17.813 bw ( KiB/s): min= 2560, max= 2784, per=4.12%, avg=2675.37, stdev=58.81, samples=19 00:37:17.813 iops : min= 640, max= 696, avg=668.84, stdev=14.70, samples=19 00:37:17.813 lat (msec) : 10=0.24%, 20=3.85%, 50=95.91% 00:37:17.813 cpu : usr=98.87%, sys=0.82%, ctx=60, majf=0, minf=86 00:37:17.813 IO depths : 1=3.4%, 2=8.0%, 4=19.0%, 8=59.3%, 16=10.3%, 32=0.0%, >=64=0.0% 00:37:17.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.813 complete : 0=0.0%, 4=92.9%, 8=2.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.813 issued rwts: total=6722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.813 filename2: (groupid=0, jobs=1): err= 0: pid=2296530: Tue Nov 19 18:36:17 2024 00:37:17.813 read: IOPS=699, BW=2800KiB/s (2867kB/s)(27.4MiB/10003msec) 00:37:17.813 slat (nsec): min=5389, max=92853, avg=16682.22, stdev=14352.96 00:37:17.813 clat (usec): min=4990, max=40624, avg=22731.10, stdev=4084.62 00:37:17.813 lat (usec): min=4997, max=40640, avg=22747.78, stdev=4086.53 00:37:17.813 clat percentiles (usec): 00:37:17.813 | 1.00th=[13304], 5.00th=[15008], 10.00th=[16581], 20.00th=[20055], 00:37:17.813 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:37:17.813 | 
70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[29492], 00:37:17.813 | 99.00th=[34866], 99.50th=[37487], 99.90th=[40633], 99.95th=[40633], 00:37:17.813 | 99.99th=[40633] 00:37:17.813 bw ( KiB/s): min= 2552, max= 3232, per=4.29%, avg=2787.05, stdev=187.86, samples=19 00:37:17.813 iops : min= 638, max= 808, avg=696.74, stdev=47.00, samples=19 00:37:17.813 lat (msec) : 10=0.37%, 20=19.34%, 50=80.29% 00:37:17.813 cpu : usr=98.99%, sys=0.72%, ctx=26, majf=0, minf=60 00:37:17.813 IO depths : 1=2.3%, 2=5.1%, 4=14.1%, 8=67.2%, 16=11.3%, 32=0.0%, >=64=0.0% 00:37:17.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.813 complete : 0=0.0%, 4=91.4%, 8=4.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.813 issued rwts: total=7002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.813 filename2: (groupid=0, jobs=1): err= 0: pid=2296531: Tue Nov 19 18:36:17 2024 00:37:17.813 read: IOPS=670, BW=2682KiB/s (2746kB/s)(26.2MiB/10022msec) 00:37:17.813 slat (nsec): min=5586, max=97729, avg=23807.21, stdev=16426.47 00:37:17.813 clat (usec): min=8831, max=33505, avg=23648.17, stdev=1653.99 00:37:17.813 lat (usec): min=8854, max=33528, avg=23671.98, stdev=1652.54 00:37:17.813 clat percentiles (usec): 00:37:17.813 | 1.00th=[14222], 5.00th=[22676], 10.00th=[22938], 20.00th=[23200], 00:37:17.813 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:17.813 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:17.813 | 99.00th=[28967], 99.50th=[30278], 99.90th=[33424], 99.95th=[33424], 00:37:17.813 | 99.99th=[33424] 00:37:17.813 bw ( KiB/s): min= 2560, max= 2944, per=4.13%, avg=2681.30, stdev=77.40, samples=20 00:37:17.813 iops : min= 640, max= 736, avg=670.30, stdev=19.35, samples=20 00:37:17.813 lat (msec) : 10=0.42%, 20=0.80%, 50=98.78% 00:37:17.813 cpu : usr=98.64%, sys=0.86%, ctx=158, majf=0, minf=58 00:37:17.814 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:17.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.814 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.814 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.814 filename2: (groupid=0, jobs=1): err= 0: pid=2296532: Tue Nov 19 18:36:17 2024 00:37:17.814 read: IOPS=674, BW=2697KiB/s (2762kB/s)(26.4MiB/10011msec) 00:37:17.814 slat (nsec): min=5560, max=97283, avg=15423.33, stdev=13042.92 00:37:17.814 clat (usec): min=11008, max=42468, avg=23619.75, stdev=4521.66 00:37:17.814 lat (usec): min=11016, max=42499, avg=23635.18, stdev=4523.11 00:37:17.814 clat percentiles (usec): 00:37:17.814 | 1.00th=[13698], 5.00th=[16057], 10.00th=[17695], 20.00th=[20579], 00:37:17.814 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:37:17.814 | 70.00th=[24249], 80.00th=[24773], 90.00th=[28967], 95.00th=[33424], 00:37:17.814 | 99.00th=[38536], 99.50th=[39584], 99.90th=[40633], 99.95th=[42206], 00:37:17.814 | 99.99th=[42730] 00:37:17.814 bw ( KiB/s): min= 2304, max= 2928, per=4.15%, avg=2696.26, stdev=149.61, samples=19 00:37:17.814 iops : min= 576, max= 732, avg=674.05, stdev=37.41, samples=19 00:37:17.814 lat (msec) : 20=17.54%, 50=82.46% 00:37:17.814 cpu : usr=98.27%, sys=1.10%, ctx=132, majf=0, minf=37 00:37:17.814 IO depths : 1=1.9%, 2=3.7%, 4=10.9%, 8=71.5%, 16=12.1%, 32=0.0%, >=64=0.0% 00:37:17.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.814 complete : 0=0.0%, 4=90.5%, 8=5.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.814 issued rwts: total=6751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.814 filename2: (groupid=0, jobs=1): err= 0: pid=2296533: Tue Nov 19 18:36:17 2024 00:37:17.814 read: IOPS=687, BW=2751KiB/s 
(2817kB/s)(26.9MiB/10021msec) 00:37:17.814 slat (usec): min=5, max=127, avg=16.90, stdev=14.48 00:37:17.814 clat (usec): min=8601, max=37986, avg=23121.58, stdev=2552.53 00:37:17.814 lat (usec): min=8635, max=37992, avg=23138.47, stdev=2553.35 00:37:17.814 clat percentiles (usec): 00:37:17.814 | 1.00th=[12911], 5.00th=[16188], 10.00th=[22414], 20.00th=[23200], 00:37:17.814 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:17.814 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:17.814 | 99.00th=[28443], 99.50th=[28967], 99.90th=[33162], 99.95th=[35390], 00:37:17.814 | 99.99th=[38011] 00:37:17.814 bw ( KiB/s): min= 2560, max= 3344, per=4.24%, avg=2750.10, stdev=185.10, samples=20 00:37:17.814 iops : min= 640, max= 836, avg=687.50, stdev=46.28, samples=20 00:37:17.814 lat (msec) : 10=0.55%, 20=8.11%, 50=91.34% 00:37:17.814 cpu : usr=99.05%, sys=0.65%, ctx=35, majf=0, minf=54 00:37:17.814 IO depths : 1=5.2%, 2=11.0%, 4=23.6%, 8=52.9%, 16=7.3%, 32=0.0%, >=64=0.0% 00:37:17.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.814 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.814 issued rwts: total=6892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.814 filename2: (groupid=0, jobs=1): err= 0: pid=2296534: Tue Nov 19 18:36:17 2024 00:37:17.814 read: IOPS=676, BW=2705KiB/s (2770kB/s)(26.4MiB/10010msec) 00:37:17.814 slat (nsec): min=5517, max=93507, avg=15721.94, stdev=11555.14 00:37:17.814 clat (usec): min=11756, max=37794, avg=23531.03, stdev=2119.15 00:37:17.814 lat (usec): min=11762, max=37808, avg=23546.75, stdev=2120.06 00:37:17.814 clat percentiles (usec): 00:37:17.814 | 1.00th=[14877], 5.00th=[19792], 10.00th=[22938], 20.00th=[23200], 00:37:17.814 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:17.814 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 
95.00th=[24773], 00:37:17.814 | 99.00th=[30016], 99.50th=[32900], 99.90th=[37487], 99.95th=[37487], 00:37:17.814 | 99.99th=[38011] 00:37:17.814 bw ( KiB/s): min= 2560, max= 2880, per=4.16%, avg=2698.63, stdev=76.25, samples=19 00:37:17.814 iops : min= 640, max= 720, avg=674.63, stdev=19.07, samples=19 00:37:17.814 lat (msec) : 20=5.08%, 50=94.92% 00:37:17.814 cpu : usr=98.96%, sys=0.72%, ctx=80, majf=0, minf=38 00:37:17.814 IO depths : 1=5.4%, 2=11.0%, 4=23.0%, 8=53.4%, 16=7.2%, 32=0.0%, >=64=0.0% 00:37:17.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.814 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.814 issued rwts: total=6770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.814 filename2: (groupid=0, jobs=1): err= 0: pid=2296535: Tue Nov 19 18:36:17 2024 00:37:17.814 read: IOPS=674, BW=2697KiB/s (2761kB/s)(26.3MiB/10004msec) 00:37:17.814 slat (nsec): min=5560, max=92154, avg=22863.38, stdev=17793.61 00:37:17.814 clat (usec): min=9438, max=41340, avg=23522.59, stdev=2053.65 00:37:17.814 lat (usec): min=9444, max=41357, avg=23545.45, stdev=2053.96 00:37:17.814 clat percentiles (usec): 00:37:17.814 | 1.00th=[15270], 5.00th=[21890], 10.00th=[22938], 20.00th=[23200], 00:37:17.814 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:37:17.814 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:37:17.814 | 99.00th=[28705], 99.50th=[31327], 99.90th=[41157], 99.95th=[41157], 00:37:17.814 | 99.99th=[41157] 00:37:17.814 bw ( KiB/s): min= 2560, max= 2912, per=4.15%, avg=2691.63, stdev=69.96, samples=19 00:37:17.814 iops : min= 640, max= 728, avg=672.89, stdev=17.51, samples=19 00:37:17.814 lat (msec) : 10=0.27%, 20=3.81%, 50=95.92% 00:37:17.814 cpu : usr=99.03%, sys=0.68%, ctx=16, majf=0, minf=41 00:37:17.814 IO depths : 1=4.9%, 2=10.4%, 4=22.1%, 8=54.4%, 16=8.1%, 32=0.0%, >=64=0.0% 
00:37:17.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.814 complete : 0=0.0%, 4=93.5%, 8=1.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.814 issued rwts: total=6744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.814 filename2: (groupid=0, jobs=1): err= 0: pid=2296536: Tue Nov 19 18:36:17 2024 00:37:17.814 read: IOPS=675, BW=2701KiB/s (2766kB/s)(26.4MiB/10004msec) 00:37:17.814 slat (nsec): min=5569, max=85244, avg=14438.91, stdev=12515.91 00:37:17.814 clat (usec): min=5594, max=40764, avg=23633.02, stdev=2877.16 00:37:17.814 lat (usec): min=5600, max=40783, avg=23647.46, stdev=2877.79 00:37:17.814 clat percentiles (usec): 00:37:17.814 | 1.00th=[14484], 5.00th=[18220], 10.00th=[21103], 20.00th=[23200], 00:37:17.814 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:17.814 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[27657], 00:37:17.814 | 99.00th=[33817], 99.50th=[36963], 99.90th=[40633], 99.95th=[40633], 00:37:17.814 | 99.99th=[40633] 00:37:17.814 bw ( KiB/s): min= 2484, max= 2848, per=4.14%, avg=2688.21, stdev=74.62, samples=19 00:37:17.814 iops : min= 621, max= 712, avg=672.05, stdev=18.66, samples=19 00:37:17.814 lat (msec) : 10=0.15%, 20=7.87%, 50=91.98% 00:37:17.814 cpu : usr=98.90%, sys=0.77%, ctx=56, majf=0, minf=40 00:37:17.814 IO depths : 1=0.1%, 2=0.4%, 4=2.4%, 8=80.1%, 16=17.1%, 32=0.0%, >=64=0.0% 00:37:17.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.814 complete : 0=0.0%, 4=89.4%, 8=9.3%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.814 issued rwts: total=6756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:17.814 00:37:17.814 Run status group 0 (all jobs): 00:37:17.814 READ: bw=63.4MiB/s (66.5MB/s), 2654KiB/s-2817KiB/s (2718kB/s-2884kB/s), io=635MiB (666MB), run=10002-10022msec 00:37:17.814 
18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.814 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 
00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.815 bdev_null0 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.815 [2024-11-19 18:36:18.211132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.815 bdev_null1 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:17.815 { 00:37:17.815 "params": { 00:37:17.815 "name": "Nvme$subsystem", 00:37:17.815 "trtype": "$TEST_TRANSPORT", 00:37:17.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:17.815 "adrfam": "ipv4", 00:37:17.815 
"trsvcid": "$NVMF_PORT", 00:37:17.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:17.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:17.815 "hdgst": ${hdgst:-false}, 00:37:17.815 "ddgst": ${ddgst:-false} 00:37:17.815 }, 00:37:17.815 "method": "bdev_nvme_attach_controller" 00:37:17.815 } 00:37:17.815 EOF 00:37:17.815 )") 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:17.815 { 00:37:17.815 "params": { 00:37:17.815 "name": "Nvme$subsystem", 00:37:17.815 "trtype": "$TEST_TRANSPORT", 00:37:17.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:17.815 "adrfam": "ipv4", 00:37:17.815 "trsvcid": "$NVMF_PORT", 00:37:17.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:17.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:17.815 "hdgst": ${hdgst:-false}, 00:37:17.815 "ddgst": ${ddgst:-false} 00:37:17.815 }, 00:37:17.815 "method": "bdev_nvme_attach_controller" 00:37:17.815 } 00:37:17.815 EOF 00:37:17.815 )") 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:17.815 "params": { 00:37:17.815 "name": "Nvme0", 00:37:17.815 "trtype": "tcp", 00:37:17.815 "traddr": "10.0.0.2", 00:37:17.815 "adrfam": "ipv4", 00:37:17.815 "trsvcid": "4420", 00:37:17.815 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:17.815 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:17.815 "hdgst": false, 00:37:17.815 "ddgst": false 00:37:17.815 }, 00:37:17.815 "method": "bdev_nvme_attach_controller" 00:37:17.815 },{ 00:37:17.815 "params": { 00:37:17.815 "name": "Nvme1", 00:37:17.815 "trtype": "tcp", 00:37:17.815 "traddr": "10.0.0.2", 00:37:17.815 "adrfam": "ipv4", 00:37:17.815 "trsvcid": "4420", 00:37:17.815 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:17.815 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:17.815 "hdgst": false, 00:37:17.815 "ddgst": false 00:37:17.815 }, 00:37:17.815 "method": "bdev_nvme_attach_controller" 00:37:17.815 }' 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:17.815 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:17.816 18:36:18 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:17.816 18:36:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:17.816 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:17.816 ... 00:37:17.816 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:17.816 ... 00:37:17.816 fio-3.35 00:37:17.816 Starting 4 threads 00:37:23.102 00:37:23.102 filename0: (groupid=0, jobs=1): err= 0: pid=2299192: Tue Nov 19 18:36:24 2024 00:37:23.102 read: IOPS=2979, BW=23.3MiB/s (24.4MB/s)(116MiB/5002msec) 00:37:23.102 slat (nsec): min=5395, max=56200, avg=7924.13, stdev=3300.61 00:37:23.102 clat (usec): min=989, max=4308, avg=2665.23, stdev=194.45 00:37:23.102 lat (usec): min=1007, max=4317, avg=2673.16, stdev=194.23 00:37:23.102 clat percentiles (usec): 00:37:23.102 | 1.00th=[ 1975], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2638], 00:37:23.102 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:37:23.102 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 2900], 00:37:23.102 | 99.00th=[ 3326], 99.50th=[ 3621], 99.90th=[ 3982], 99.95th=[ 4015], 00:37:23.102 | 99.99th=[ 4293] 00:37:23.102 bw ( KiB/s): min=23712, max=24000, per=25.16%, avg=23847.11, stdev=101.86, samples=9 00:37:23.102 iops : min= 2964, max= 3000, avg=2980.89, stdev=12.73, samples=9 00:37:23.102 lat (usec) : 1000=0.01% 00:37:23.102 lat (msec) : 2=1.13%, 4=98.81%, 10=0.05% 00:37:23.102 cpu : usr=95.76%, sys=3.96%, ctx=8, majf=0, minf=110 00:37:23.102 IO depths : 1=0.1%, 2=0.1%, 4=67.9%, 8=31.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:23.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.102 complete : 
0=0.0%, 4=95.8%, 8=4.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.102 issued rwts: total=14904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:23.102 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:23.102 filename0: (groupid=0, jobs=1): err= 0: pid=2299193: Tue Nov 19 18:36:24 2024 00:37:23.102 read: IOPS=2957, BW=23.1MiB/s (24.2MB/s)(116MiB/5001msec) 00:37:23.102 slat (usec): min=5, max=106, avg= 7.82, stdev= 3.70 00:37:23.102 clat (usec): min=1087, max=4643, avg=2683.03, stdev=206.97 00:37:23.102 lat (usec): min=1093, max=4669, avg=2690.85, stdev=207.21 00:37:23.102 clat percentiles (usec): 00:37:23.102 | 1.00th=[ 2057], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2638], 00:37:23.102 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:37:23.102 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 2933], 00:37:23.102 | 99.00th=[ 3621], 99.50th=[ 3884], 99.90th=[ 4424], 99.95th=[ 4555], 00:37:23.102 | 99.99th=[ 4621] 00:37:23.102 bw ( KiB/s): min=23504, max=23728, per=24.95%, avg=23651.44, stdev=67.78, samples=9 00:37:23.102 iops : min= 2938, max= 2966, avg=2956.33, stdev= 8.49, samples=9 00:37:23.102 lat (msec) : 2=0.68%, 4=98.97%, 10=0.36% 00:37:23.102 cpu : usr=96.20%, sys=3.52%, ctx=8, majf=0, minf=63 00:37:23.102 IO depths : 1=0.1%, 2=0.1%, 4=73.3%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:23.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.102 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.102 issued rwts: total=14792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:23.102 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:23.102 filename1: (groupid=0, jobs=1): err= 0: pid=2299194: Tue Nov 19 18:36:24 2024 00:37:23.102 read: IOPS=2951, BW=23.1MiB/s (24.2MB/s)(115MiB/5001msec) 00:37:23.102 slat (nsec): min=5389, max=87961, avg=7698.63, stdev=3950.53 00:37:23.102 clat (usec): min=1202, max=4554, avg=2690.24, stdev=188.34 00:37:23.102 lat 
(usec): min=1207, max=4560, avg=2697.94, stdev=188.67 00:37:23.102 clat percentiles (usec): 00:37:23.102 | 1.00th=[ 2114], 5.00th=[ 2507], 10.00th=[ 2606], 20.00th=[ 2638], 00:37:23.102 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:37:23.102 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 2933], 00:37:23.102 | 99.00th=[ 3458], 99.50th=[ 3687], 99.90th=[ 4293], 99.95th=[ 4359], 00:37:23.102 | 99.99th=[ 4555] 00:37:23.102 bw ( KiB/s): min=23294, max=23744, per=24.89%, avg=23589.11, stdev=136.77, samples=9 00:37:23.102 iops : min= 2911, max= 2968, avg=2948.56, stdev=17.30, samples=9 00:37:23.102 lat (msec) : 2=0.54%, 4=99.23%, 10=0.23% 00:37:23.102 cpu : usr=96.92%, sys=2.82%, ctx=7, majf=0, minf=76 00:37:23.102 IO depths : 1=0.1%, 2=0.2%, 4=71.6%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:23.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.102 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.102 issued rwts: total=14760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:23.102 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:23.102 filename1: (groupid=0, jobs=1): err= 0: pid=2299195: Tue Nov 19 18:36:24 2024 00:37:23.102 read: IOPS=2961, BW=23.1MiB/s (24.3MB/s)(116MiB/5001msec) 00:37:23.102 slat (nsec): min=5396, max=90883, avg=9033.40, stdev=3838.01 00:37:23.102 clat (usec): min=1369, max=4909, avg=2678.11, stdev=189.89 00:37:23.102 lat (usec): min=1377, max=4937, avg=2687.14, stdev=190.15 00:37:23.102 clat percentiles (usec): 00:37:23.103 | 1.00th=[ 2114], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2638], 00:37:23.103 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:37:23.103 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 2900], 00:37:23.103 | 99.00th=[ 3425], 99.50th=[ 3687], 99.90th=[ 4490], 99.95th=[ 4686], 00:37:23.103 | 99.99th=[ 4883] 00:37:23.103 bw ( KiB/s): min=23424, max=23840, per=24.98%, 
avg=23680.00, stdev=121.33, samples=9 00:37:23.103 iops : min= 2928, max= 2980, avg=2960.00, stdev=15.17, samples=9 00:37:23.103 lat (msec) : 2=0.57%, 4=99.22%, 10=0.21% 00:37:23.103 cpu : usr=96.28%, sys=3.44%, ctx=14, majf=0, minf=88 00:37:23.103 IO depths : 1=0.1%, 2=0.1%, 4=70.3%, 8=29.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:23.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.103 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:23.103 issued rwts: total=14811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:23.103 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:23.103 00:37:23.103 Run status group 0 (all jobs): 00:37:23.103 READ: bw=92.6MiB/s (97.1MB/s), 23.1MiB/s-23.3MiB/s (24.2MB/s-24.4MB/s), io=463MiB (486MB), run=5001-5002msec 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.103 00:37:23.103 real 0m24.489s 00:37:23.103 user 5m17.496s 00:37:23.103 sys 0m4.392s 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:23.103 18:36:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:23.103 ************************************ 00:37:23.103 END TEST fio_dif_rand_params 00:37:23.103 ************************************ 00:37:23.103 18:36:24 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:23.103 18:36:24 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:23.103 18:36:24 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:23.103 18:36:24 
nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:23.103 ************************************ 00:37:23.103 START TEST fio_dif_digest 00:37:23.103 ************************************ 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:23.103 bdev_null0 00:37:23.103 18:36:24 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.103 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:23.364 [2024-11-19 18:36:24.598267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 
00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:23.364 { 00:37:23.364 "params": { 00:37:23.364 "name": "Nvme$subsystem", 00:37:23.364 "trtype": "$TEST_TRANSPORT", 00:37:23.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:23.364 "adrfam": "ipv4", 00:37:23.364 "trsvcid": "$NVMF_PORT", 00:37:23.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:23.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:23.364 "hdgst": ${hdgst:-false}, 00:37:23.364 "ddgst": ${ddgst:-false} 00:37:23.364 }, 00:37:23.364 "method": "bdev_nvme_attach_controller" 00:37:23.364 } 00:37:23.364 EOF 00:37:23.364 )") 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:37:23.364 18:36:24 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:23.364 "params": { 00:37:23.364 "name": "Nvme0", 00:37:23.364 "trtype": "tcp", 00:37:23.364 "traddr": "10.0.0.2", 00:37:23.364 "adrfam": "ipv4", 00:37:23.364 "trsvcid": "4420", 00:37:23.364 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:23.364 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:23.364 "hdgst": true, 00:37:23.364 "ddgst": true 00:37:23.364 }, 00:37:23.364 "method": "bdev_nvme_attach_controller" 00:37:23.364 }' 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:23.364 18:36:24 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:23.364 18:36:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:23.625 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:23.625 ... 00:37:23.625 fio-3.35 00:37:23.625 Starting 3 threads 00:37:35.866 00:37:35.866 filename0: (groupid=0, jobs=1): err= 0: pid=2300696: Tue Nov 19 18:36:35 2024 00:37:35.866 read: IOPS=307, BW=38.5MiB/s (40.4MB/s)(387MiB/10043msec) 00:37:35.866 slat (nsec): min=5792, max=34681, avg=8164.94, stdev=1777.79 00:37:35.866 clat (usec): min=5970, max=51614, avg=9718.38, stdev=1299.70 00:37:35.866 lat (usec): min=5978, max=51620, avg=9726.54, stdev=1299.57 00:37:35.866 clat percentiles (usec): 00:37:35.866 | 1.00th=[ 7308], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:37:35.866 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:37:35.866 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:37:35.866 | 99.00th=[11469], 99.50th=[11731], 99.90th=[12256], 99.95th=[49546], 00:37:35.866 | 99.99th=[51643] 00:37:35.866 bw ( KiB/s): min=38400, max=41472, per=34.32%, avg=39564.80, stdev=740.45, samples=20 00:37:35.866 iops : min= 300, max= 324, avg=309.10, stdev= 5.78, samples=20 00:37:35.866 lat (msec) : 10=66.38%, 20=33.56%, 50=0.03%, 100=0.03% 00:37:35.866 cpu : usr=94.51%, sys=5.26%, 
ctx=15, majf=0, minf=153 00:37:35.866 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:35.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.866 issued rwts: total=3093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.866 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:35.866 filename0: (groupid=0, jobs=1): err= 0: pid=2300697: Tue Nov 19 18:36:35 2024 00:37:35.866 read: IOPS=301, BW=37.7MiB/s (39.5MB/s)(378MiB/10045msec) 00:37:35.866 slat (nsec): min=5801, max=31749, avg=6623.35, stdev=1078.10 00:37:35.866 clat (usec): min=6767, max=47100, avg=9935.90, stdev=1199.45 00:37:35.866 lat (usec): min=6773, max=47106, avg=9942.53, stdev=1199.48 00:37:35.866 clat percentiles (usec): 00:37:35.866 | 1.00th=[ 7767], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:37:35.866 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:37:35.866 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10814], 95.00th=[11207], 00:37:35.866 | 99.00th=[11731], 99.50th=[11994], 99.90th=[12649], 99.95th=[44827], 00:37:35.866 | 99.99th=[46924] 00:37:35.866 bw ( KiB/s): min=37376, max=40448, per=33.58%, avg=38707.20, stdev=731.66, samples=20 00:37:35.866 iops : min= 292, max= 316, avg=302.40, stdev= 5.72, samples=20 00:37:35.866 lat (msec) : 10=55.19%, 20=44.75%, 50=0.07% 00:37:35.866 cpu : usr=94.71%, sys=5.06%, ctx=13, majf=0, minf=146 00:37:35.866 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:35.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.866 issued rwts: total=3026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.866 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:35.866 filename0: (groupid=0, jobs=1): err= 0: pid=2300698: Tue Nov 19 18:36:35 
2024 00:37:35.866 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(366MiB/10045msec) 00:37:35.866 slat (nsec): min=6056, max=31074, avg=9024.53, stdev=1102.40 00:37:35.866 clat (usec): min=7126, max=94091, avg=10269.96, stdev=3094.68 00:37:35.866 lat (usec): min=7134, max=94100, avg=10278.98, stdev=3094.67 00:37:35.866 clat percentiles (usec): 00:37:35.866 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:37:35.866 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:37:35.866 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:37:35.866 | 99.00th=[12125], 99.50th=[12649], 99.90th=[53216], 99.95th=[93848], 00:37:35.866 | 99.99th=[93848] 00:37:35.866 bw ( KiB/s): min=31232, max=39680, per=32.48%, avg=37440.00, stdev=1855.13, samples=20 00:37:35.866 iops : min= 244, max= 310, avg=292.50, stdev=14.49, samples=20 00:37:35.866 lat (msec) : 10=45.54%, 20=54.15%, 50=0.03%, 100=0.27% 00:37:35.866 cpu : usr=94.92%, sys=4.84%, ctx=16, majf=0, minf=101 00:37:35.866 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:35.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.866 issued rwts: total=2927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.866 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:35.866 00:37:35.866 Run status group 0 (all jobs): 00:37:35.866 READ: bw=113MiB/s (118MB/s), 36.4MiB/s-38.5MiB/s (38.2MB/s-40.4MB/s), io=1131MiB (1186MB), run=10043-10045msec 00:37:35.866 18:36:35 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:35.866 18:36:35 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:35.866 18:36:35 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:35.866 18:36:35 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:35.866 18:36:35 nvmf_dif.fio_dif_digest -- 
target/dif.sh@36 -- # local sub_id=0 00:37:35.866 18:36:35 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:35.866 18:36:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.866 18:36:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:35.866 18:36:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.866 18:36:35 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:35.866 18:36:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.866 18:36:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:35.866 18:36:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.866 00:37:35.866 real 0m11.311s 00:37:35.866 user 0m43.885s 00:37:35.866 sys 0m1.825s 00:37:35.866 18:36:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:35.866 18:36:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:35.866 ************************************ 00:37:35.866 END TEST fio_dif_digest 00:37:35.866 ************************************ 00:37:35.866 18:36:35 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:35.866 18:36:35 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:35.866 18:36:35 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:35.866 18:36:35 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:35.866 18:36:35 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:35.866 18:36:35 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:35.866 18:36:35 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:35.866 18:36:35 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:35.866 rmmod nvme_tcp 00:37:35.866 rmmod nvme_fabrics 00:37:35.866 rmmod nvme_keyring 00:37:35.866 18:36:35 nvmf_dif -- nvmf/common.sh@127 -- 
# modprobe -v -r nvme-fabrics 00:37:35.866 18:36:35 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:35.866 18:36:35 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:35.866 18:36:35 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2289765 ']' 00:37:35.866 18:36:35 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2289765 00:37:35.866 18:36:35 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2289765 ']' 00:37:35.866 18:36:35 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2289765 00:37:35.866 18:36:35 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:37:35.866 18:36:35 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:35.866 18:36:35 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2289765 00:37:35.866 18:36:36 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:35.866 18:36:36 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:35.866 18:36:36 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2289765' 00:37:35.866 killing process with pid 2289765 00:37:35.866 18:36:36 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2289765 00:37:35.866 18:36:36 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2289765 00:37:35.866 18:36:36 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:35.866 18:36:36 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:38.414 Waiting for block devices as requested 00:37:38.414 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:38.414 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:38.414 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:38.414 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:38.414 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:38.675 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:38.675 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:38.675 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 
00:37:38.935 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:38.935 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:39.195 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:39.195 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:39.195 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:39.455 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:39.455 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:39.455 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:39.716 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:39.976 18:36:41 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:39.976 18:36:41 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:39.976 18:36:41 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:39.976 18:36:41 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:39.976 18:36:41 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:39.976 18:36:41 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:39.976 18:36:41 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:39.976 18:36:41 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:39.976 18:36:41 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:39.976 18:36:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:39.976 18:36:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:41.943 18:36:43 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:41.943 00:37:41.943 real 1m18.764s 00:37:41.943 user 7m59.414s 00:37:41.943 sys 0m22.462s 00:37:41.943 18:36:43 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:41.943 18:36:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:41.943 ************************************ 00:37:41.943 END TEST nvmf_dif 00:37:41.943 ************************************ 00:37:42.261 18:36:43 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:42.261 18:36:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:42.261 18:36:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:42.261 18:36:43 -- common/autotest_common.sh@10 -- # set +x 00:37:42.261 ************************************ 00:37:42.261 START TEST nvmf_abort_qd_sizes 00:37:42.261 ************************************ 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:42.261 * Looking for test storage... 00:37:42.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:42.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.261 --rc genhtml_branch_coverage=1 00:37:42.261 --rc genhtml_function_coverage=1 00:37:42.261 --rc 
genhtml_legend=1 00:37:42.261 --rc geninfo_all_blocks=1 00:37:42.261 --rc geninfo_unexecuted_blocks=1 00:37:42.261 00:37:42.261 ' 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:42.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.261 --rc genhtml_branch_coverage=1 00:37:42.261 --rc genhtml_function_coverage=1 00:37:42.261 --rc genhtml_legend=1 00:37:42.261 --rc geninfo_all_blocks=1 00:37:42.261 --rc geninfo_unexecuted_blocks=1 00:37:42.261 00:37:42.261 ' 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:42.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.261 --rc genhtml_branch_coverage=1 00:37:42.261 --rc genhtml_function_coverage=1 00:37:42.261 --rc genhtml_legend=1 00:37:42.261 --rc geninfo_all_blocks=1 00:37:42.261 --rc geninfo_unexecuted_blocks=1 00:37:42.261 00:37:42.261 ' 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:42.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:42.261 --rc genhtml_branch_coverage=1 00:37:42.261 --rc genhtml_function_coverage=1 00:37:42.261 --rc genhtml_legend=1 00:37:42.261 --rc geninfo_all_blocks=1 00:37:42.261 --rc geninfo_unexecuted_blocks=1 00:37:42.261 00:37:42.261 ' 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:42.261 18:36:43 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:42.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:37:42.262 18:36:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:50.451 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:50.452 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:50.452 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:50.452 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:50.452 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:50.452 18:36:50 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:50.452 18:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:50.452 18:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:50.452 18:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:50.452 18:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:50.452 18:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:50.452 18:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:50.452 18:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:50.452 18:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:50.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:50.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:37:50.452 00:37:50.452 --- 10.0.0.2 ping statistics --- 00:37:50.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:50.452 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:37:50.452 18:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:50.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:50.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:37:50.452 00:37:50.452 --- 10.0.0.1 ping statistics --- 00:37:50.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:50.452 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:37:50.452 18:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:50.452 18:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:50.452 18:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:50.452 18:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:53.759 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:53.759 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:53.759 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:53.759 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:53.759 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:53.759 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:53.759 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:53.759 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:53.759 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:53.759 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:53.759 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:53.759 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:53.759 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:53.759 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:53.759 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:37:53.759 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:53.759 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2310134 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2310134 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2310134 ']' 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:54.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:54.021 18:36:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:54.021 [2024-11-19 18:36:55.362567] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:37:54.021 [2024-11-19 18:36:55.362627] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:54.021 [2024-11-19 18:36:55.464479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:54.283 [2024-11-19 18:36:55.518452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:54.283 [2024-11-19 18:36:55.518506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:54.283 [2024-11-19 18:36:55.518515] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:54.283 [2024-11-19 18:36:55.518522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:54.283 [2024-11-19 18:36:55.518528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:54.283 [2024-11-19 18:36:55.520908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:54.283 [2024-11-19 18:36:55.521069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:54.283 [2024-11-19 18:36:55.521232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:54.283 [2024-11-19 18:36:55.521233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:54.856 18:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:54.856 ************************************ 00:37:54.856 START TEST spdk_target_abort 00:37:54.856 ************************************ 00:37:54.856 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:37:54.856 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:54.856 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:54.856 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.856 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.117 spdk_targetn1 00:37:55.117 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.117 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:55.117 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.117 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.117 [2024-11-19 18:36:56.584041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.378 [2024-11-19 18:36:56.632347] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:55.378 18:36:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:55.639 [2024-11-19 18:36:56.910745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:248 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:55.639 [2024-11-19 18:36:56.910782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0021 p:1 m:0 dnr:0 00:37:55.639 [2024-11-19 18:36:56.917652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:496 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:55.639 [2024-11-19 18:36:56.917674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:003f p:1 m:0 dnr:0 00:37:55.639 [2024-11-19 18:36:56.918020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:504 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:55.639 [2024-11-19 
18:36:56.918034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0040 p:1 m:0 dnr:0 00:37:55.639 [2024-11-19 18:36:56.957772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1784 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:55.639 [2024-11-19 18:36:56.957795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00e0 p:1 m:0 dnr:0 00:37:55.639 [2024-11-19 18:36:56.967889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2112 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:55.639 [2024-11-19 18:36:56.967909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:37:55.639 [2024-11-19 18:36:56.981715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2520 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:55.639 [2024-11-19 18:36:56.981736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:55.639 [2024-11-19 18:36:56.989650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2792 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:55.639 [2024-11-19 18:36:56.989674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:58.944 Initializing NVMe Controllers 00:37:58.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:58.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:58.944 Initialization complete. Launching workers. 
00:37:58.944 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11550, failed: 7 00:37:58.944 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2349, failed to submit 9208 00:37:58.944 success 754, unsuccessful 1595, failed 0 00:37:58.944 18:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:58.944 18:36:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:58.944 [2024-11-19 18:37:00.162320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:1208 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:37:58.944 [2024-11-19 18:37:00.162371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:009c p:1 m:0 dnr:0 00:37:58.944 [2024-11-19 18:37:00.170176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:1368 len:8 PRP1 0x200004e56000 PRP2 0x0 00:37:58.944 [2024-11-19 18:37:00.170198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:00ac p:1 m:0 dnr:0 00:37:58.944 [2024-11-19 18:37:00.201294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:2152 len:8 PRP1 0x200004e48000 PRP2 0x0 00:37:58.944 [2024-11-19 18:37:00.201320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:37:58.944 [2024-11-19 18:37:00.209166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:179 nsid:1 lba:2344 len:8 PRP1 0x200004e40000 PRP2 0x0 00:37:58.944 [2024-11-19 18:37:00.209189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY 
REQUEST (00/07) qid:4 cid:179 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:58.944 [2024-11-19 18:37:00.217236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:2488 len:8 PRP1 0x200004e48000 PRP2 0x0 00:37:58.944 [2024-11-19 18:37:00.217257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:58.944 [2024-11-19 18:37:00.233263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:2848 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:37:58.944 [2024-11-19 18:37:00.233285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:58.944 [2024-11-19 18:37:00.272823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:3712 len:8 PRP1 0x200004e48000 PRP2 0x0 00:37:58.944 [2024-11-19 18:37:00.272845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:00dd p:0 m:0 dnr:0 00:37:58.944 [2024-11-19 18:37:00.288327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:4072 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:37:58.944 [2024-11-19 18:37:00.288351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:0004 p:1 m:0 dnr:0 00:37:59.887 [2024-11-19 18:37:01.211230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:25400 len:8 PRP1 0x200004e62000 PRP2 0x0 00:37:59.887 [2024-11-19 18:37:01.211262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0068 p:1 m:0 dnr:0 00:38:00.457 [2024-11-19 18:37:01.617184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:35080 len:8 PRP1 0x200004e60000 PRP2 0x0 00:38:00.458 [2024-11-19 18:37:01.617219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:38:01.863 Initializing NVMe Controllers 00:38:01.863 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:01.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:01.863 Initialization complete. Launching workers. 00:38:01.863 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8617, failed: 10 00:38:01.863 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1228, failed to submit 7399 00:38:01.863 success 345, unsuccessful 883, failed 0 00:38:01.863 18:37:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:01.863 18:37:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:05.159 Initializing NVMe Controllers 00:38:05.159 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:05.159 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:05.159 Initialization complete. Launching workers. 
00:38:05.159 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43746, failed: 0
00:38:05.159 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2725, failed to submit 41021
00:38:05.159 success 588, unsuccessful 2137, failed 0
00:38:05.159 18:37:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
00:38:05.159 18:37:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:05.159 18:37:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:38:05.159 18:37:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:05.159 18:37:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:38:05.159 18:37:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:05.159 18:37:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:38:07.073 18:37:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:07.073 18:37:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2310134
00:38:07.073 18:37:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2310134 ']'
00:38:07.073 18:37:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2310134
00:38:07.073 18:37:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname
00:38:07.073 18:37:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:07.073 18:37:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2310134
00:38:07.073 18:37:08
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:07.073 18:37:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:07.073 18:37:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2310134' 00:38:07.073 killing process with pid 2310134 00:38:07.073 18:37:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2310134 00:38:07.073 18:37:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2310134 00:38:07.073 00:38:07.073 real 0m12.246s 00:38:07.073 user 0m49.822s 00:38:07.073 sys 0m2.077s 00:38:07.073 18:37:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:07.073 18:37:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:07.073 ************************************ 00:38:07.073 END TEST spdk_target_abort 00:38:07.073 ************************************ 00:38:07.335 18:37:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:07.335 18:37:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:07.335 18:37:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:07.335 18:37:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:07.335 ************************************ 00:38:07.335 START TEST kernel_target_abort 00:38:07.335 ************************************ 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:07.335 18:37:08 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:07.335 18:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:10.640 Waiting for block devices as requested 00:38:10.640 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:10.640 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:10.640 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:10.903 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:10.903 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:10.903 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:11.165 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:11.165 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:11.165 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:11.427 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:11.427 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:11.689 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:11.689 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:11.689 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:11.949 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:11.949 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:11.949 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:12.209 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:12.209 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:12.209 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:12.209 18:37:13 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:38:12.209 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:38:12.209 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:38:12.209 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:38:12.209 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:38:12.209 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:38:12.469 No valid GPT data, bailing
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt=
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:38:12.470
00:38:12.470 Discovery Log Number of Records 2, Generation counter 2
00:38:12.470 =====Discovery Log Entry 0======
00:38:12.470 trtype: tcp
00:38:12.470 adrfam: ipv4
00:38:12.470 subtype: current discovery subsystem
00:38:12.470 treq: not specified, sq flow control disable supported
00:38:12.470 portid: 1
00:38:12.470 trsvcid: 4420
00:38:12.470 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:38:12.470 traddr: 10.0.0.1
00:38:12.470 eflags: none
00:38:12.470 sectype: none
00:38:12.470 =====Discovery Log Entry 1======
00:38:12.470 trtype: tcp
00:38:12.470 adrfam: ipv4
00:38:12.470 subtype: nvme subsystem
00:38:12.470 treq: not specified, sq flow control disable supported
00:38:12.470 portid: 1
00:38:12.470 trsvcid: 4420
00:38:12.470 subnqn: nqn.2016-06.io.spdk:testnqn
00:38:12.470 traddr: 10.0.0.1
00:38:12.470 eflags: none
00:38:12.470 sectype: none
00:38:12.470 18:37:13
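[Editor's note: condensed, the `configure_kernel_target` steps traced above amount to the following sketch of a kernel NVMe-oF/TCP target set up through configfs. The trace shows the `mkdir`, `echo`, and `ln -s` commands but not the attribute file names they write to; those names (`attr_allow_any_host`, `device_path`, `enable`, `addr_*`) are filled in here from the kernel's standard nvmet configfs layout and should be treated as an illustration, not a verbatim reconstruction of nvmf/common.sh. Requires root and the nvmet modules.]

```shell
# Sketch of the kernel NVMe-oF target setup traced above (illustrative;
# attribute file names assumed from the kernel nvmet configfs layout).
modprobe nvmet nvmet-tcp

nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

mkdir "$subsys"                                # create the subsystem
mkdir "$subsys/namespaces/1"                   # create namespace 1 under it
mkdir "$port"                                  # create a listener port

echo 1 > "$subsys/attr_allow_any_host"         # accept any host NQN
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"         # enable the namespace

echo 10.0.0.1 > "$port/addr_traddr"            # listen address
echo tcp      > "$port/addr_trtype"            # transport type
echo 4420     > "$port/addr_trsvcid"           # service id (TCP port)
echo ipv4     > "$port/addr_adrfam"            # address family

ln -s "$subsys" "$port/subsystems/"            # expose the subsystem on the port
```

With the symlink in place, `nvme discover -t tcp -a 10.0.0.1 -s 4420` returns the two discovery log entries shown above (the discovery subsystem plus the test subsystem).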
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:12.470 18:37:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:15.770 Initializing NVMe Controllers 00:38:15.770 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:15.770 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:15.770 Initialization complete. Launching workers. 
00:38:15.770 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68252, failed: 0
00:38:15.770 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 68252, failed to submit 0
00:38:15.770 success 0, unsuccessful 68252, failed 0
00:38:15.770 18:37:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:38:15.770 18:37:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:38:19.071 Initializing NVMe Controllers
00:38:19.071 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:38:19.071 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:38:19.071 Initialization complete. Launching workers.
00:38:19.071 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 119468, failed: 0
00:38:19.071 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30054, failed to submit 89414
00:38:19.071 success 0, unsuccessful 30054, failed 0
00:38:19.071 18:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:38:19.071 18:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:38:22.371 Initializing NVMe Controllers
00:38:22.371 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:38:22.371 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:38:22.371 Initialization complete. Launching workers.
00:38:22.371 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145958, failed: 0
00:38:22.371 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36510, failed to submit 109448
00:38:22.371 success 0, unsuccessful 36510, failed 0
00:38:22.371 18:37:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target
00:38:22.371 18:37:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:38:22.371 18:37:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0
00:38:22.371 18:37:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:38:22.371 18:37:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:38:22.372 18:37:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:38:22.372 18:37:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:38:22.372 18:37:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:38:22.372 18:37:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:38:22.372 18:37:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:38:25.675 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:38:25.675 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:38:25.675 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:38:25.675 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:38:25.675 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:38:25.675
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:38:25.675 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:38:25.675 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:38:25.675 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:38:25.675 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:38:25.675 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:38:25.675 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:38:25.675 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:38:25.675 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:38:25.675 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:38:25.675 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:38:27.589 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:38:27.589
00:38:27.589 real 0m20.343s
00:38:27.589 user 0m9.998s
00:38:27.589 sys 0m6.053s
00:38:27.589 18:37:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:27.589 18:37:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x
00:38:27.589 ************************************
00:38:27.589 END TEST kernel_target_abort
00:38:27.589 ************************************
00:38:27.589 18:37:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:38:27.589 18:37:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini
00:38:27.589 18:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:27.589 18:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync
00:38:27.589 18:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:27.589 18:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e
00:38:27.589 18:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:27.589 18:37:28 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:27.589 rmmod nvme_tcp
00:38:27.589 rmmod nvme_fabrics
00:38:27.589 rmmod nvme_keyring
00:38:27.850 18:37:29 nvmf_abort_qd_sizes -- nvmf/common.sh@127
-- # modprobe -v -r nvme-fabrics 00:38:27.850 18:37:29 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:27.850 18:37:29 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:27.850 18:37:29 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2310134 ']' 00:38:27.850 18:37:29 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2310134 00:38:27.850 18:37:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2310134 ']' 00:38:27.850 18:37:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2310134 00:38:27.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2310134) - No such process 00:38:27.850 18:37:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2310134 is not found' 00:38:27.850 Process with pid 2310134 is not found 00:38:27.850 18:37:29 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:27.850 18:37:29 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:31.150 Waiting for block devices as requested 00:38:31.150 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:31.150 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:31.411 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:31.411 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:31.411 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:31.672 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:31.672 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:31.672 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:31.933 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:31.933 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:32.193 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:32.193 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:32.193 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:32.454 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:32.454 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:38:32.454 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:38:32.715 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:38:32.976 18:37:34 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:38:32.976 18:37:34 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:38:32.976 18:37:34 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr
00:38:32.976 18:37:34 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save
00:38:32.976 18:37:34 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:38:32.976 18:37:34 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore
00:38:32.976 18:37:34 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:38:32.976 18:37:34 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns
00:38:32.976 18:37:34 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:32.976 18:37:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:38:32.976 18:37:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:34.889 18:37:36 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:34.889
00:38:34.889 real 0m52.872s
00:38:34.889 user 1m5.365s
00:38:34.889 sys 0m19.495s
00:38:34.889 18:37:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:34.889 18:37:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:38:34.889 ************************************
00:38:34.889 END TEST nvmf_abort_qd_sizes
00:38:34.889 ************************************
00:38:35.149 18:37:36 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
00:38:35.149 18:37:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:38:35.149 18:37:36 -- common/autotest_common.sh@1111 -- #
xtrace_disable 00:38:35.149 18:37:36 -- common/autotest_common.sh@10 -- # set +x 00:38:35.149 ************************************ 00:38:35.149 START TEST keyring_file 00:38:35.150 ************************************ 00:38:35.150 18:37:36 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:35.150 * Looking for test storage... 00:38:35.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:35.150 18:37:36 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:35.150 18:37:36 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:38:35.150 18:37:36 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:35.150 18:37:36 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:35.150 18:37:36 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:35.150 18:37:36 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:35.150 18:37:36 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:35.150 18:37:36 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:35.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.150 --rc genhtml_branch_coverage=1 00:38:35.150 --rc genhtml_function_coverage=1 00:38:35.150 --rc genhtml_legend=1 00:38:35.150 --rc geninfo_all_blocks=1 00:38:35.150 --rc geninfo_unexecuted_blocks=1 00:38:35.150 00:38:35.150 ' 00:38:35.150 18:37:36 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:35.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.150 --rc genhtml_branch_coverage=1 00:38:35.150 --rc genhtml_function_coverage=1 00:38:35.150 --rc genhtml_legend=1 00:38:35.150 --rc geninfo_all_blocks=1 00:38:35.150 --rc 
geninfo_unexecuted_blocks=1 00:38:35.150 00:38:35.150 ' 00:38:35.150 18:37:36 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:35.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.150 --rc genhtml_branch_coverage=1 00:38:35.150 --rc genhtml_function_coverage=1 00:38:35.150 --rc genhtml_legend=1 00:38:35.150 --rc geninfo_all_blocks=1 00:38:35.150 --rc geninfo_unexecuted_blocks=1 00:38:35.150 00:38:35.150 ' 00:38:35.150 18:37:36 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:35.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.150 --rc genhtml_branch_coverage=1 00:38:35.150 --rc genhtml_function_coverage=1 00:38:35.150 --rc genhtml_legend=1 00:38:35.150 --rc geninfo_all_blocks=1 00:38:35.150 --rc geninfo_unexecuted_blocks=1 00:38:35.150 00:38:35.150 ' 00:38:35.150 18:37:36 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:35.150 18:37:36 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:35.150 18:37:36 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:35.150 18:37:36 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:35.150 18:37:36 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:35.150 18:37:36 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:35.150 18:37:36 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:35.150 18:37:36 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:35.150 18:37:36 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:35.150 18:37:36 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:35.150 18:37:36 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:35.150 18:37:36 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:35.411 18:37:36 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:35.411 18:37:36 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:35.411 18:37:36 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:35.411 18:37:36 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:35.411 18:37:36 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:35.411 18:37:36 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:35.411 18:37:36 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:35.411 18:37:36 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:35.411 18:37:36 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:35.411 18:37:36 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:35.411 18:37:36 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:35.411 18:37:36 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:35.411 18:37:36 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.411 18:37:36 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.412 18:37:36 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.412 18:37:36 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:35.412 18:37:36 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:38:35.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:35.412 18:37:36 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:35.412 18:37:36 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:35.412 18:37:36 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:35.412 18:37:36 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:35.412 18:37:36 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:35.412 18:37:36 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.v4vBdpquYX 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.v4vBdpquYX 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.v4vBdpquYX 00:38:35.412 18:37:36 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.v4vBdpquYX 00:38:35.412 18:37:36 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.zekyV2RqCN 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:35.412 18:37:36 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.zekyV2RqCN 00:38:35.412 18:37:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.zekyV2RqCN 00:38:35.412 18:37:36 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.zekyV2RqCN 
00:38:35.412 18:37:36 keyring_file -- keyring/file.sh@30 -- # tgtpid=2320293 00:38:35.412 18:37:36 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2320293 00:38:35.412 18:37:36 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:35.412 18:37:36 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2320293 ']' 00:38:35.412 18:37:36 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:35.412 18:37:36 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:35.412 18:37:36 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:35.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:35.412 18:37:36 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:35.412 18:37:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:35.412 [2024-11-19 18:37:36.815840] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:38:35.412 [2024-11-19 18:37:36.815919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320293 ] 00:38:35.673 [2024-11-19 18:37:36.907203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:35.673 [2024-11-19 18:37:36.960101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:36.244 18:37:37 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:36.244 [2024-11-19 18:37:37.614195] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:36.244 null0 00:38:36.244 [2024-11-19 18:37:37.646248] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:36.244 [2024-11-19 18:37:37.646712] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.244 18:37:37 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:36.244 [2024-11-19 18:37:37.678308] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:36.244 request: 00:38:36.244 { 00:38:36.244 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:36.244 "secure_channel": false, 00:38:36.244 "listen_address": { 00:38:36.244 "trtype": "tcp", 00:38:36.244 "traddr": "127.0.0.1", 00:38:36.244 "trsvcid": "4420" 00:38:36.244 }, 00:38:36.244 "method": "nvmf_subsystem_add_listener", 00:38:36.244 "req_id": 1 00:38:36.244 } 00:38:36.244 Got JSON-RPC error response 00:38:36.244 response: 00:38:36.244 { 00:38:36.244 "code": -32602, 00:38:36.244 "message": "Invalid parameters" 00:38:36.244 } 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:36.244 18:37:37 keyring_file -- keyring/file.sh@47 -- # bperfpid=2320381 00:38:36.244 18:37:37 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2320381 /var/tmp/bperf.sock 00:38:36.244 18:37:37 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:36.244 18:37:37 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2320381 ']' 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:36.244 18:37:37 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:36.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:36.245 18:37:37 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:36.245 18:37:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:36.507 [2024-11-19 18:37:37.733953] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 00:38:36.507 [2024-11-19 18:37:37.734025] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320381 ] 00:38:36.507 [2024-11-19 18:37:37.828671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.507 [2024-11-19 18:37:37.881258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:37.449 18:37:38 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:37.450 18:37:38 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:37.450 18:37:38 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.v4vBdpquYX 00:38:37.450 18:37:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.v4vBdpquYX 00:38:37.450 18:37:38 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.zekyV2RqCN 00:38:37.450 18:37:38 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.zekyV2RqCN 00:38:37.450 18:37:38 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:37.450 18:37:38 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:37.450 18:37:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:37.450 18:37:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:37.450 18:37:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:37.711 18:37:39 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.v4vBdpquYX == \/\t\m\p\/\t\m\p\.\v\4\v\B\d\p\q\u\Y\X ]] 00:38:37.711 18:37:39 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:37.711 18:37:39 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:37.711 18:37:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:37.711 18:37:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:37.711 18:37:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:37.971 18:37:39 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.zekyV2RqCN == \/\t\m\p\/\t\m\p\.\z\e\k\y\V\2\R\q\C\N ]] 00:38:37.971 18:37:39 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:37.971 18:37:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:37.971 18:37:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:37.971 18:37:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:37.971 18:37:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:37.971 18:37:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:38:38.232 18:37:39 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:38.232 18:37:39 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:38.232 18:37:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:38.232 18:37:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:38.232 18:37:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:38.232 18:37:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:38.232 18:37:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.232 18:37:39 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:38.232 18:37:39 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:38.232 18:37:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:38.493 [2024-11-19 18:37:39.838906] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:38.493 nvme0n1 00:38:38.493 18:37:39 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:38.493 18:37:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:38.493 18:37:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:38.493 18:37:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:38.493 18:37:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:38.493 18:37:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:38:38.756 18:37:40 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:38.756 18:37:40 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:38.756 18:37:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:38.756 18:37:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:38.756 18:37:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:38.756 18:37:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:38.756 18:37:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:39.017 18:37:40 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:39.017 18:37:40 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:39.017 Running I/O for 1 seconds... 00:38:40.404 18562.00 IOPS, 72.51 MiB/s 00:38:40.405 Latency(us) 00:38:40.405 [2024-11-19T17:37:41.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:40.405 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:40.405 nvme0n1 : 1.00 18620.10 72.73 0.00 0.00 6862.10 3741.01 17476.27 00:38:40.405 [2024-11-19T17:37:41.876Z] =================================================================================================================== 00:38:40.405 [2024-11-19T17:37:41.876Z] Total : 18620.10 72.73 0.00 0.00 6862.10 3741.01 17476.27 00:38:40.405 { 00:38:40.405 "results": [ 00:38:40.405 { 00:38:40.405 "job": "nvme0n1", 00:38:40.405 "core_mask": "0x2", 00:38:40.405 "workload": "randrw", 00:38:40.405 "percentage": 50, 00:38:40.405 "status": "finished", 00:38:40.405 "queue_depth": 128, 00:38:40.405 "io_size": 4096, 00:38:40.405 "runtime": 1.003754, 00:38:40.405 "iops": 18620.1001440592, 00:38:40.405 "mibps": 72.73476618773125, 00:38:40.405 
"io_failed": 0, 00:38:40.405 "io_timeout": 0, 00:38:40.405 "avg_latency_us": 6862.103970037454, 00:38:40.405 "min_latency_us": 3741.0133333333333, 00:38:40.405 "max_latency_us": 17476.266666666666 00:38:40.405 } 00:38:40.405 ], 00:38:40.405 "core_count": 1 00:38:40.405 } 00:38:40.405 18:37:41 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:40.405 18:37:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:40.405 18:37:41 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:40.405 18:37:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:40.405 18:37:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:40.405 18:37:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:40.405 18:37:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:40.405 18:37:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:40.405 18:37:41 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:40.405 18:37:41 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:40.405 18:37:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:40.405 18:37:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:40.405 18:37:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:40.405 18:37:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:40.405 18:37:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:40.750 18:37:42 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:40.750 18:37:42 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:40.750 18:37:42 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:40.750 18:37:42 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:40.750 18:37:42 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:40.750 18:37:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:40.750 18:37:42 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:40.751 18:37:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:40.751 18:37:42 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:40.751 18:37:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:40.751 [2024-11-19 18:37:42.169422] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:40.751 [2024-11-19 18:37:42.170202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b7c10 (107): Transport endpoint is not connected 00:38:40.751 [2024-11-19 18:37:42.171197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b7c10 (9): Bad file descriptor 00:38:40.751 [2024-11-19 18:37:42.172199] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:40.751 [2024-11-19 18:37:42.172207] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:40.751 [2024-11-19 18:37:42.172213] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:40.751 [2024-11-19 18:37:42.172220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:38:40.751 request: 00:38:40.751 { 00:38:40.751 "name": "nvme0", 00:38:40.751 "trtype": "tcp", 00:38:40.751 "traddr": "127.0.0.1", 00:38:40.751 "adrfam": "ipv4", 00:38:40.751 "trsvcid": "4420", 00:38:40.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:40.751 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:40.751 "prchk_reftag": false, 00:38:40.751 "prchk_guard": false, 00:38:40.751 "hdgst": false, 00:38:40.751 "ddgst": false, 00:38:40.751 "psk": "key1", 00:38:40.751 "allow_unrecognized_csi": false, 00:38:40.751 "method": "bdev_nvme_attach_controller", 00:38:40.751 "req_id": 1 00:38:40.751 } 00:38:40.751 Got JSON-RPC error response 00:38:40.751 response: 00:38:40.751 { 00:38:40.751 "code": -5, 00:38:40.751 "message": "Input/output error" 00:38:40.751 } 00:38:41.053 18:37:42 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:41.053 18:37:42 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:41.053 18:37:42 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:41.053 18:37:42 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:41.053 18:37:42 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:41.053 18:37:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:41.053 18:37:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:41.053 18:37:42 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:38:41.053 18:37:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:41.053 18:37:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.053 18:37:42 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:41.053 18:37:42 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:41.053 18:37:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:41.053 18:37:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:41.053 18:37:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:41.053 18:37:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:41.053 18:37:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.314 18:37:42 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:41.314 18:37:42 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:41.314 18:37:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:41.314 18:37:42 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:41.314 18:37:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:41.574 18:37:42 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:41.574 18:37:42 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:41.575 18:37:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.834 18:37:43 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:38:41.834 18:37:43 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.v4vBdpquYX 00:38:41.834 18:37:43 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.v4vBdpquYX 00:38:41.834 18:37:43 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:41.834 18:37:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.v4vBdpquYX 00:38:41.834 18:37:43 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:41.834 18:37:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:41.834 18:37:43 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:41.834 18:37:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:41.835 18:37:43 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.v4vBdpquYX 00:38:41.835 18:37:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.v4vBdpquYX 00:38:41.835 [2024-11-19 18:37:43.205855] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.v4vBdpquYX': 0100660 00:38:41.835 [2024-11-19 18:37:43.205874] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:41.835 request: 00:38:41.835 { 00:38:41.835 "name": "key0", 00:38:41.835 "path": "/tmp/tmp.v4vBdpquYX", 00:38:41.835 "method": "keyring_file_add_key", 00:38:41.835 "req_id": 1 00:38:41.835 } 00:38:41.835 Got JSON-RPC error response 00:38:41.835 response: 00:38:41.835 { 00:38:41.835 "code": -1, 00:38:41.835 "message": "Operation not permitted" 00:38:41.835 } 00:38:41.835 18:37:43 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:41.835 18:37:43 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:41.835 18:37:43 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:41.835 18:37:43 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:41.835 18:37:43 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.v4vBdpquYX 00:38:41.835 18:37:43 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.v4vBdpquYX 00:38:41.835 18:37:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.v4vBdpquYX 00:38:42.095 18:37:43 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.v4vBdpquYX 00:38:42.095 18:37:43 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:42.095 18:37:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:42.095 18:37:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:42.095 18:37:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:42.095 18:37:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:42.095 18:37:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:42.356 18:37:43 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:42.356 18:37:43 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:42.356 18:37:43 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:42.356 18:37:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:42.356 18:37:43 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:42.356 18:37:43 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:42.356 18:37:43 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:42.356 18:37:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:42.356 18:37:43 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:42.356 18:37:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:42.356 [2024-11-19 18:37:43.771289] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.v4vBdpquYX': No such file or directory 00:38:42.356 [2024-11-19 18:37:43.771303] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:42.356 [2024-11-19 18:37:43.771316] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:42.356 [2024-11-19 18:37:43.771322] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:42.356 [2024-11-19 18:37:43.771328] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:42.356 [2024-11-19 18:37:43.771333] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:42.356 request: 00:38:42.356 { 00:38:42.356 "name": "nvme0", 00:38:42.356 "trtype": "tcp", 00:38:42.356 "traddr": "127.0.0.1", 00:38:42.356 "adrfam": "ipv4", 00:38:42.356 "trsvcid": "4420", 00:38:42.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:42.356 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:38:42.356 "prchk_reftag": false, 00:38:42.356 "prchk_guard": false, 00:38:42.356 "hdgst": false, 00:38:42.356 "ddgst": false, 00:38:42.356 "psk": "key0", 00:38:42.356 "allow_unrecognized_csi": false, 00:38:42.356 "method": "bdev_nvme_attach_controller", 00:38:42.356 "req_id": 1 00:38:42.356 } 00:38:42.356 Got JSON-RPC error response 00:38:42.356 response: 00:38:42.356 { 00:38:42.356 "code": -19, 00:38:42.356 "message": "No such device" 00:38:42.356 } 00:38:42.356 18:37:43 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:42.356 18:37:43 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:42.356 18:37:43 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:42.356 18:37:43 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:42.356 18:37:43 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:42.356 18:37:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:42.617 18:37:43 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:42.617 18:37:43 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:42.617 18:37:43 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:42.617 18:37:43 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:42.617 18:37:43 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:42.617 18:37:43 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:42.617 18:37:43 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CTYEqN1X1S 00:38:42.617 18:37:43 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:42.617 18:37:43 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:42.617 18:37:43 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:38:42.617 18:37:43 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:42.617 18:37:43 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:42.617 18:37:43 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:42.617 18:37:43 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:42.617 18:37:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CTYEqN1X1S 00:38:42.617 18:37:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CTYEqN1X1S 00:38:42.617 18:37:44 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.CTYEqN1X1S 00:38:42.617 18:37:44 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CTYEqN1X1S 00:38:42.617 18:37:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CTYEqN1X1S 00:38:42.877 18:37:44 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:42.877 18:37:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:43.138 nvme0n1 00:38:43.138 18:37:44 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:43.138 18:37:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:43.138 18:37:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:43.138 18:37:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:43.138 18:37:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:43.138 18:37:44 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:43.398 18:37:44 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:43.398 18:37:44 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:43.398 18:37:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:43.398 18:37:44 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:43.398 18:37:44 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:43.398 18:37:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:43.398 18:37:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:43.398 18:37:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:43.659 18:37:44 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:43.659 18:37:44 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:43.659 18:37:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:43.659 18:37:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:43.659 18:37:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:43.659 18:37:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:43.659 18:37:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:43.919 18:37:45 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:43.919 18:37:45 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:43.919 18:37:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:38:43.919 18:37:45 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:43.919 18:37:45 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:43.919 18:37:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:44.180 18:37:45 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:44.180 18:37:45 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CTYEqN1X1S 00:38:44.180 18:37:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CTYEqN1X1S 00:38:44.440 18:37:45 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.zekyV2RqCN 00:38:44.440 18:37:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.zekyV2RqCN 00:38:44.440 18:37:45 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:44.440 18:37:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:44.701 nvme0n1 00:38:44.701 18:37:46 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:44.701 18:37:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:44.962 18:37:46 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:44.962 "subsystems": [ 00:38:44.962 { 00:38:44.962 "subsystem": 
"keyring", 00:38:44.962 "config": [ 00:38:44.962 { 00:38:44.962 "method": "keyring_file_add_key", 00:38:44.962 "params": { 00:38:44.962 "name": "key0", 00:38:44.962 "path": "/tmp/tmp.CTYEqN1X1S" 00:38:44.962 } 00:38:44.962 }, 00:38:44.962 { 00:38:44.962 "method": "keyring_file_add_key", 00:38:44.962 "params": { 00:38:44.962 "name": "key1", 00:38:44.962 "path": "/tmp/tmp.zekyV2RqCN" 00:38:44.962 } 00:38:44.962 } 00:38:44.962 ] 00:38:44.962 }, 00:38:44.962 { 00:38:44.962 "subsystem": "iobuf", 00:38:44.962 "config": [ 00:38:44.962 { 00:38:44.962 "method": "iobuf_set_options", 00:38:44.962 "params": { 00:38:44.962 "small_pool_count": 8192, 00:38:44.962 "large_pool_count": 1024, 00:38:44.962 "small_bufsize": 8192, 00:38:44.962 "large_bufsize": 135168, 00:38:44.962 "enable_numa": false 00:38:44.962 } 00:38:44.962 } 00:38:44.962 ] 00:38:44.962 }, 00:38:44.962 { 00:38:44.962 "subsystem": "sock", 00:38:44.962 "config": [ 00:38:44.962 { 00:38:44.962 "method": "sock_set_default_impl", 00:38:44.962 "params": { 00:38:44.962 "impl_name": "posix" 00:38:44.962 } 00:38:44.962 }, 00:38:44.962 { 00:38:44.962 "method": "sock_impl_set_options", 00:38:44.962 "params": { 00:38:44.962 "impl_name": "ssl", 00:38:44.962 "recv_buf_size": 4096, 00:38:44.962 "send_buf_size": 4096, 00:38:44.962 "enable_recv_pipe": true, 00:38:44.962 "enable_quickack": false, 00:38:44.962 "enable_placement_id": 0, 00:38:44.962 "enable_zerocopy_send_server": true, 00:38:44.962 "enable_zerocopy_send_client": false, 00:38:44.962 "zerocopy_threshold": 0, 00:38:44.962 "tls_version": 0, 00:38:44.962 "enable_ktls": false 00:38:44.962 } 00:38:44.962 }, 00:38:44.962 { 00:38:44.962 "method": "sock_impl_set_options", 00:38:44.962 "params": { 00:38:44.962 "impl_name": "posix", 00:38:44.962 "recv_buf_size": 2097152, 00:38:44.962 "send_buf_size": 2097152, 00:38:44.962 "enable_recv_pipe": true, 00:38:44.962 "enable_quickack": false, 00:38:44.962 "enable_placement_id": 0, 00:38:44.962 "enable_zerocopy_send_server": true, 
00:38:44.962 "enable_zerocopy_send_client": false, 00:38:44.962 "zerocopy_threshold": 0, 00:38:44.962 "tls_version": 0, 00:38:44.962 "enable_ktls": false 00:38:44.962 } 00:38:44.962 } 00:38:44.962 ] 00:38:44.962 }, 00:38:44.962 { 00:38:44.962 "subsystem": "vmd", 00:38:44.962 "config": [] 00:38:44.962 }, 00:38:44.962 { 00:38:44.962 "subsystem": "accel", 00:38:44.962 "config": [ 00:38:44.962 { 00:38:44.962 "method": "accel_set_options", 00:38:44.962 "params": { 00:38:44.962 "small_cache_size": 128, 00:38:44.962 "large_cache_size": 16, 00:38:44.962 "task_count": 2048, 00:38:44.962 "sequence_count": 2048, 00:38:44.962 "buf_count": 2048 00:38:44.962 } 00:38:44.962 } 00:38:44.962 ] 00:38:44.962 }, 00:38:44.962 { 00:38:44.962 "subsystem": "bdev", 00:38:44.962 "config": [ 00:38:44.962 { 00:38:44.963 "method": "bdev_set_options", 00:38:44.963 "params": { 00:38:44.963 "bdev_io_pool_size": 65535, 00:38:44.963 "bdev_io_cache_size": 256, 00:38:44.963 "bdev_auto_examine": true, 00:38:44.963 "iobuf_small_cache_size": 128, 00:38:44.963 "iobuf_large_cache_size": 16 00:38:44.963 } 00:38:44.963 }, 00:38:44.963 { 00:38:44.963 "method": "bdev_raid_set_options", 00:38:44.963 "params": { 00:38:44.963 "process_window_size_kb": 1024, 00:38:44.963 "process_max_bandwidth_mb_sec": 0 00:38:44.963 } 00:38:44.963 }, 00:38:44.963 { 00:38:44.963 "method": "bdev_iscsi_set_options", 00:38:44.963 "params": { 00:38:44.963 "timeout_sec": 30 00:38:44.963 } 00:38:44.963 }, 00:38:44.963 { 00:38:44.963 "method": "bdev_nvme_set_options", 00:38:44.963 "params": { 00:38:44.963 "action_on_timeout": "none", 00:38:44.963 "timeout_us": 0, 00:38:44.963 "timeout_admin_us": 0, 00:38:44.963 "keep_alive_timeout_ms": 10000, 00:38:44.963 "arbitration_burst": 0, 00:38:44.963 "low_priority_weight": 0, 00:38:44.963 "medium_priority_weight": 0, 00:38:44.963 "high_priority_weight": 0, 00:38:44.963 "nvme_adminq_poll_period_us": 10000, 00:38:44.963 "nvme_ioq_poll_period_us": 0, 00:38:44.963 "io_queue_requests": 512, 
00:38:44.963 "delay_cmd_submit": true, 00:38:44.963 "transport_retry_count": 4, 00:38:44.963 "bdev_retry_count": 3, 00:38:44.963 "transport_ack_timeout": 0, 00:38:44.963 "ctrlr_loss_timeout_sec": 0, 00:38:44.963 "reconnect_delay_sec": 0, 00:38:44.963 "fast_io_fail_timeout_sec": 0, 00:38:44.963 "disable_auto_failback": false, 00:38:44.963 "generate_uuids": false, 00:38:44.963 "transport_tos": 0, 00:38:44.963 "nvme_error_stat": false, 00:38:44.963 "rdma_srq_size": 0, 00:38:44.963 "io_path_stat": false, 00:38:44.963 "allow_accel_sequence": false, 00:38:44.963 "rdma_max_cq_size": 0, 00:38:44.963 "rdma_cm_event_timeout_ms": 0, 00:38:44.963 "dhchap_digests": [ 00:38:44.963 "sha256", 00:38:44.963 "sha384", 00:38:44.963 "sha512" 00:38:44.963 ], 00:38:44.963 "dhchap_dhgroups": [ 00:38:44.963 "null", 00:38:44.963 "ffdhe2048", 00:38:44.963 "ffdhe3072", 00:38:44.963 "ffdhe4096", 00:38:44.963 "ffdhe6144", 00:38:44.963 "ffdhe8192" 00:38:44.963 ] 00:38:44.963 } 00:38:44.963 }, 00:38:44.963 { 00:38:44.963 "method": "bdev_nvme_attach_controller", 00:38:44.963 "params": { 00:38:44.963 "name": "nvme0", 00:38:44.963 "trtype": "TCP", 00:38:44.963 "adrfam": "IPv4", 00:38:44.963 "traddr": "127.0.0.1", 00:38:44.963 "trsvcid": "4420", 00:38:44.963 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:44.963 "prchk_reftag": false, 00:38:44.963 "prchk_guard": false, 00:38:44.963 "ctrlr_loss_timeout_sec": 0, 00:38:44.963 "reconnect_delay_sec": 0, 00:38:44.963 "fast_io_fail_timeout_sec": 0, 00:38:44.963 "psk": "key0", 00:38:44.963 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:44.963 "hdgst": false, 00:38:44.963 "ddgst": false, 00:38:44.963 "multipath": "multipath" 00:38:44.963 } 00:38:44.963 }, 00:38:44.963 { 00:38:44.963 "method": "bdev_nvme_set_hotplug", 00:38:44.963 "params": { 00:38:44.963 "period_us": 100000, 00:38:44.963 "enable": false 00:38:44.963 } 00:38:44.963 }, 00:38:44.963 { 00:38:44.963 "method": "bdev_wait_for_examine" 00:38:44.963 } 00:38:44.963 ] 00:38:44.963 }, 00:38:44.963 { 
00:38:44.963 "subsystem": "nbd", 00:38:44.963 "config": [] 00:38:44.963 } 00:38:44.963 ] 00:38:44.963 }' 00:38:44.963 18:37:46 keyring_file -- keyring/file.sh@115 -- # killprocess 2320381 00:38:44.963 18:37:46 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2320381 ']' 00:38:44.963 18:37:46 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2320381 00:38:44.963 18:37:46 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:44.963 18:37:46 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:44.963 18:37:46 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2320381 00:38:44.963 18:37:46 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:44.963 18:37:46 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:44.963 18:37:46 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2320381' 00:38:44.963 killing process with pid 2320381 00:38:44.963 18:37:46 keyring_file -- common/autotest_common.sh@973 -- # kill 2320381 00:38:44.963 Received shutdown signal, test time was about 1.000000 seconds 00:38:44.963 00:38:44.963 Latency(us) 00:38:44.963 [2024-11-19T17:37:46.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:44.963 [2024-11-19T17:37:46.434Z] =================================================================================================================== 00:38:44.963 [2024-11-19T17:37:46.434Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:44.963 18:37:46 keyring_file -- common/autotest_common.sh@978 -- # wait 2320381 00:38:45.224 18:37:46 keyring_file -- keyring/file.sh@118 -- # bperfpid=2322190 00:38:45.224 18:37:46 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2322190 /var/tmp/bperf.sock 00:38:45.224 18:37:46 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2322190 ']' 00:38:45.224 18:37:46 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:38:45.224 18:37:46 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:45.224 18:37:46 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:45.224 18:37:46 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:45.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:45.224 18:37:46 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:45.224 18:37:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:45.224 18:37:46 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:45.224 "subsystems": [ 00:38:45.224 { 00:38:45.224 "subsystem": "keyring", 00:38:45.224 "config": [ 00:38:45.224 { 00:38:45.224 "method": "keyring_file_add_key", 00:38:45.224 "params": { 00:38:45.224 "name": "key0", 00:38:45.224 "path": "/tmp/tmp.CTYEqN1X1S" 00:38:45.224 } 00:38:45.224 }, 00:38:45.224 { 00:38:45.224 "method": "keyring_file_add_key", 00:38:45.224 "params": { 00:38:45.224 "name": "key1", 00:38:45.224 "path": "/tmp/tmp.zekyV2RqCN" 00:38:45.224 } 00:38:45.224 } 00:38:45.224 ] 00:38:45.224 }, 00:38:45.224 { 00:38:45.224 "subsystem": "iobuf", 00:38:45.224 "config": [ 00:38:45.224 { 00:38:45.224 "method": "iobuf_set_options", 00:38:45.224 "params": { 00:38:45.224 "small_pool_count": 8192, 00:38:45.224 "large_pool_count": 1024, 00:38:45.224 "small_bufsize": 8192, 00:38:45.224 "large_bufsize": 135168, 00:38:45.224 "enable_numa": false 00:38:45.224 } 00:38:45.224 } 00:38:45.224 ] 00:38:45.224 }, 00:38:45.224 { 00:38:45.224 "subsystem": "sock", 00:38:45.224 "config": [ 00:38:45.224 { 00:38:45.224 "method": "sock_set_default_impl", 00:38:45.224 "params": { 00:38:45.224 "impl_name": "posix" 00:38:45.224 } 00:38:45.224 }, 
00:38:45.224 { 00:38:45.224 "method": "sock_impl_set_options", 00:38:45.224 "params": { 00:38:45.224 "impl_name": "ssl", 00:38:45.224 "recv_buf_size": 4096, 00:38:45.224 "send_buf_size": 4096, 00:38:45.224 "enable_recv_pipe": true, 00:38:45.224 "enable_quickack": false, 00:38:45.224 "enable_placement_id": 0, 00:38:45.224 "enable_zerocopy_send_server": true, 00:38:45.224 "enable_zerocopy_send_client": false, 00:38:45.224 "zerocopy_threshold": 0, 00:38:45.224 "tls_version": 0, 00:38:45.224 "enable_ktls": false 00:38:45.224 } 00:38:45.224 }, 00:38:45.224 { 00:38:45.224 "method": "sock_impl_set_options", 00:38:45.224 "params": { 00:38:45.224 "impl_name": "posix", 00:38:45.224 "recv_buf_size": 2097152, 00:38:45.224 "send_buf_size": 2097152, 00:38:45.224 "enable_recv_pipe": true, 00:38:45.224 "enable_quickack": false, 00:38:45.224 "enable_placement_id": 0, 00:38:45.224 "enable_zerocopy_send_server": true, 00:38:45.224 "enable_zerocopy_send_client": false, 00:38:45.224 "zerocopy_threshold": 0, 00:38:45.224 "tls_version": 0, 00:38:45.224 "enable_ktls": false 00:38:45.224 } 00:38:45.224 } 00:38:45.224 ] 00:38:45.224 }, 00:38:45.224 { 00:38:45.224 "subsystem": "vmd", 00:38:45.224 "config": [] 00:38:45.224 }, 00:38:45.224 { 00:38:45.224 "subsystem": "accel", 00:38:45.224 "config": [ 00:38:45.224 { 00:38:45.224 "method": "accel_set_options", 00:38:45.224 "params": { 00:38:45.224 "small_cache_size": 128, 00:38:45.224 "large_cache_size": 16, 00:38:45.224 "task_count": 2048, 00:38:45.224 "sequence_count": 2048, 00:38:45.224 "buf_count": 2048 00:38:45.224 } 00:38:45.224 } 00:38:45.224 ] 00:38:45.224 }, 00:38:45.224 { 00:38:45.224 "subsystem": "bdev", 00:38:45.224 "config": [ 00:38:45.224 { 00:38:45.225 "method": "bdev_set_options", 00:38:45.225 "params": { 00:38:45.225 "bdev_io_pool_size": 65535, 00:38:45.225 "bdev_io_cache_size": 256, 00:38:45.225 "bdev_auto_examine": true, 00:38:45.225 "iobuf_small_cache_size": 128, 00:38:45.225 "iobuf_large_cache_size": 16 00:38:45.225 } 
00:38:45.225 }, 00:38:45.225 { 00:38:45.225 "method": "bdev_raid_set_options", 00:38:45.225 "params": { 00:38:45.225 "process_window_size_kb": 1024, 00:38:45.225 "process_max_bandwidth_mb_sec": 0 00:38:45.225 } 00:38:45.225 }, 00:38:45.225 { 00:38:45.225 "method": "bdev_iscsi_set_options", 00:38:45.225 "params": { 00:38:45.225 "timeout_sec": 30 00:38:45.225 } 00:38:45.225 }, 00:38:45.225 { 00:38:45.225 "method": "bdev_nvme_set_options", 00:38:45.225 "params": { 00:38:45.225 "action_on_timeout": "none", 00:38:45.225 "timeout_us": 0, 00:38:45.225 "timeout_admin_us": 0, 00:38:45.225 "keep_alive_timeout_ms": 10000, 00:38:45.225 "arbitration_burst": 0, 00:38:45.225 "low_priority_weight": 0, 00:38:45.225 "medium_priority_weight": 0, 00:38:45.225 "high_priority_weight": 0, 00:38:45.225 "nvme_adminq_poll_period_us": 10000, 00:38:45.225 "nvme_ioq_poll_period_us": 0, 00:38:45.225 "io_queue_requests": 512, 00:38:45.225 "delay_cmd_submit": true, 00:38:45.225 "transport_retry_count": 4, 00:38:45.225 "bdev_retry_count": 3, 00:38:45.225 "transport_ack_timeout": 0, 00:38:45.225 "ctrlr_loss_timeout_sec": 0, 00:38:45.225 "reconnect_delay_sec": 0, 00:38:45.225 "fast_io_fail_timeout_sec": 0, 00:38:45.225 "disable_auto_failback": false, 00:38:45.225 "generate_uuids": false, 00:38:45.225 "transport_tos": 0, 00:38:45.225 "nvme_error_stat": false, 00:38:45.225 "rdma_srq_size": 0, 00:38:45.225 "io_path_stat": false, 00:38:45.225 "allow_accel_sequence": false, 00:38:45.225 "rdma_max_cq_size": 0, 00:38:45.225 "rdma_cm_event_timeout_ms": 0, 00:38:45.225 "dhchap_digests": [ 00:38:45.225 "sha256", 00:38:45.225 "sha384", 00:38:45.225 "sha512" 00:38:45.225 ], 00:38:45.225 "dhchap_dhgroups": [ 00:38:45.225 "null", 00:38:45.225 "ffdhe2048", 00:38:45.225 "ffdhe3072", 00:38:45.225 "ffdhe4096", 00:38:45.225 "ffdhe6144", 00:38:45.225 "ffdhe8192" 00:38:45.225 ] 00:38:45.225 } 00:38:45.225 }, 00:38:45.225 { 00:38:45.225 "method": "bdev_nvme_attach_controller", 00:38:45.225 "params": { 00:38:45.225 
"name": "nvme0", 00:38:45.225 "trtype": "TCP", 00:38:45.225 "adrfam": "IPv4", 00:38:45.225 "traddr": "127.0.0.1", 00:38:45.225 "trsvcid": "4420", 00:38:45.225 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:45.225 "prchk_reftag": false, 00:38:45.225 "prchk_guard": false, 00:38:45.225 "ctrlr_loss_timeout_sec": 0, 00:38:45.225 "reconnect_delay_sec": 0, 00:38:45.225 "fast_io_fail_timeout_sec": 0, 00:38:45.225 "psk": "key0", 00:38:45.225 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:45.225 "hdgst": false, 00:38:45.225 "ddgst": false, 00:38:45.225 "multipath": "multipath" 00:38:45.225 } 00:38:45.225 }, 00:38:45.225 { 00:38:45.225 "method": "bdev_nvme_set_hotplug", 00:38:45.225 "params": { 00:38:45.225 "period_us": 100000, 00:38:45.225 "enable": false 00:38:45.225 } 00:38:45.225 }, 00:38:45.225 { 00:38:45.225 "method": "bdev_wait_for_examine" 00:38:45.225 } 00:38:45.225 ] 00:38:45.225 }, 00:38:45.225 { 00:38:45.225 "subsystem": "nbd", 00:38:45.225 "config": [] 00:38:45.225 } 00:38:45.225 ] 00:38:45.225 }' 00:38:45.225 [2024-11-19 18:37:46.546746] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
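The `format_interchange_psk`/`format_key` steps traced earlier turn a hex key and a digest selector into an NVMe TLS PSK interchange string. A standalone sketch of that construction follows; the exact byte handling (key taken as its literal ASCII bytes, CRC32 trailer appended little-endian) is an assumption about the helper, not copied from it:

```shell
# Hedged sketch of the format_interchange_psk step traced above:
# build "NVMeTLSkey-1:<digest>:<base64(key || crc32(key))>:".
# Treating the key as its literal ASCII bytes is an assumption.
key=00112233445566778899aabbccddeeff
digest=0
psk=$(python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib

key = sys.argv[1].encode()                # literal ASCII bytes (assumption)
crc = struct.pack("<I", zlib.crc32(key))  # little-endian CRC32 trailer
b64 = base64.b64encode(key + crc).decode()
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{b64}:")
EOF
)
echo "$psk"
```

The resulting string is what `keyring_file_add_key` then loads from the temp file created by `mktemp` in the trace.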
00:38:45.225 [2024-11-19 18:37:46.546802] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2322190 ] 00:38:45.225 [2024-11-19 18:37:46.629457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:45.225 [2024-11-19 18:37:46.657338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:45.485 [2024-11-19 18:37:46.800173] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:46.057 18:37:47 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:46.057 18:37:47 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:46.057 18:37:47 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:46.057 18:37:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:46.057 18:37:47 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:46.057 18:37:47 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:46.057 18:37:47 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:46.057 18:37:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:46.057 18:37:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:46.057 18:37:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:46.057 18:37:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:46.057 18:37:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:46.319 18:37:47 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:46.319 18:37:47 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:46.319 18:37:47 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:46.319 18:37:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:46.319 18:37:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:46.319 18:37:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:46.319 18:37:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:46.579 18:37:47 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:46.579 18:37:47 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:46.579 18:37:47 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:46.579 18:37:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:46.579 18:37:48 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:46.579 18:37:48 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:46.579 18:37:48 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.CTYEqN1X1S /tmp/tmp.zekyV2RqCN 00:38:46.579 18:37:48 keyring_file -- keyring/file.sh@20 -- # killprocess 2322190 00:38:46.579 18:37:48 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2322190 ']' 00:38:46.579 18:37:48 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2322190 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2322190 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2322190' 00:38:46.840 killing process with pid 2322190 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@973 -- # kill 2322190 00:38:46.840 Received shutdown signal, test time was about 1.000000 seconds 00:38:46.840 00:38:46.840 Latency(us) 00:38:46.840 [2024-11-19T17:37:48.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:46.840 [2024-11-19T17:37:48.311Z] =================================================================================================================== 00:38:46.840 [2024-11-19T17:37:48.311Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@978 -- # wait 2322190 00:38:46.840 18:37:48 keyring_file -- keyring/file.sh@21 -- # killprocess 2320293 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2320293 ']' 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2320293 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2320293 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2320293' 00:38:46.840 killing process with pid 2320293 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@973 -- # kill 2320293 00:38:46.840 18:37:48 keyring_file -- common/autotest_common.sh@978 -- # wait 2320293 00:38:47.101 00:38:47.101 real 0m12.059s 00:38:47.101 user 0m29.144s 00:38:47.101 sys 0m2.682s 00:38:47.101 18:37:48 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:38:47.101 18:37:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:47.101 ************************************ 00:38:47.101 END TEST keyring_file 00:38:47.101 ************************************ 00:38:47.101 18:37:48 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:38:47.101 18:37:48 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:47.101 18:37:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:47.101 18:37:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:47.101 18:37:48 -- common/autotest_common.sh@10 -- # set +x 00:38:47.101 ************************************ 00:38:47.101 START TEST keyring_linux 00:38:47.101 ************************************ 00:38:47.101 18:37:48 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:47.101 Joined session keyring: 718661767 00:38:47.362 * Looking for test storage... 
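The keyring_linux prologue below walks `scripts/common.sh`'s `lt`/`cmp_versions` helpers, which split version strings on dots and compare field by field (`lt 1.15 2` is true). A minimal standalone sketch of the same dot-split compare, not the exact scripts/common.sh implementation:

```shell
# Sketch of the dot-split version compare traced below; missing
# fields compare as 0, and 10# forces base-10 despite leading zeros.
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0
    ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
  done
  return 1  # equal is not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"
```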
00:38:47.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:47.362 18:37:48 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:47.362 18:37:48 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:38:47.362 18:37:48 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:47.362 18:37:48 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:47.362 18:37:48 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:47.362 18:37:48 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:47.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.362 --rc genhtml_branch_coverage=1 00:38:47.362 --rc genhtml_function_coverage=1 00:38:47.362 --rc genhtml_legend=1 00:38:47.362 --rc geninfo_all_blocks=1 00:38:47.362 --rc geninfo_unexecuted_blocks=1 00:38:47.362 00:38:47.362 ' 00:38:47.362 18:37:48 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:47.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.362 --rc genhtml_branch_coverage=1 00:38:47.362 --rc genhtml_function_coverage=1 00:38:47.362 --rc genhtml_legend=1 00:38:47.362 --rc geninfo_all_blocks=1 00:38:47.362 --rc geninfo_unexecuted_blocks=1 00:38:47.362 00:38:47.362 ' 
00:38:47.362 18:37:48 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:47.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.362 --rc genhtml_branch_coverage=1 00:38:47.362 --rc genhtml_function_coverage=1 00:38:47.362 --rc genhtml_legend=1 00:38:47.362 --rc geninfo_all_blocks=1 00:38:47.362 --rc geninfo_unexecuted_blocks=1 00:38:47.362 00:38:47.362 ' 00:38:47.362 18:37:48 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:47.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.362 --rc genhtml_branch_coverage=1 00:38:47.362 --rc genhtml_function_coverage=1 00:38:47.362 --rc genhtml_legend=1 00:38:47.362 --rc geninfo_all_blocks=1 00:38:47.362 --rc geninfo_unexecuted_blocks=1 00:38:47.362 00:38:47.362 ' 00:38:47.362 18:37:48 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:47.362 18:37:48 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:47.362 18:37:48 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:47.362 18:37:48 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:47.362 18:37:48 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.363 18:37:48 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.363 18:37:48 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.363 18:37:48 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:47.363 18:37:48 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.363 18:37:48 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:47.363 18:37:48 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:47.363 18:37:48 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:47.363 18:37:48 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:47.363 18:37:48 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:47.363 18:37:48 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:47.363 18:37:48 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:38:47.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:47.363 18:37:48 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:47.363 18:37:48 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:47.363 18:37:48 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:47.363 18:37:48 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:47.363 18:37:48 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:47.363 18:37:48 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:47.363 18:37:48 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:47.363 18:37:48 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:47.363 18:37:48 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:47.363 18:37:48 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:47.363 18:37:48 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:47.363 18:37:48 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:47.363 18:37:48 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:47.363 18:37:48 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:47.363 18:37:48 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:47.363 18:37:48 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:47.363 18:37:48 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:47.363 18:37:48 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:47.363 18:37:48 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:47.363 18:37:48 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:38:47.363 18:37:48 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:47.363 18:37:48 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:47.623 18:37:48 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:47.623 18:37:48 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:47.623 /tmp/:spdk-test:key0 00:38:47.623 18:37:48 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:47.623 18:37:48 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:47.623 18:37:48 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:47.623 18:37:48 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:47.623 18:37:48 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:47.624 18:37:48 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:47.624 18:37:48 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:47.624 18:37:48 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:47.624 18:37:48 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:47.624 18:37:48 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:47.624 18:37:48 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:47.624 18:37:48 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:47.624 18:37:48 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:47.624 18:37:48 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:47.624 18:37:48 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:47.624 /tmp/:spdk-test:key1 00:38:47.624 18:37:48 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2322631 00:38:47.624 18:37:48 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 2322631 00:38:47.624 18:37:48 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:47.624 18:37:48 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2322631 ']' 00:38:47.624 18:37:48 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:47.624 18:37:48 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:47.624 18:37:48 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:47.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:47.624 18:37:48 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:47.624 18:37:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:47.624 [2024-11-19 18:37:48.960285] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:38:47.624 [2024-11-19 18:37:48.960345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2322631 ] 00:38:47.624 [2024-11-19 18:37:49.046406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.624 [2024-11-19 18:37:49.082243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:48.564 18:37:49 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:48.564 18:37:49 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:48.564 18:37:49 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:48.564 18:37:49 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:48.564 18:37:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:48.564 [2024-11-19 18:37:49.758289] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:48.564 null0 00:38:48.564 [2024-11-19 18:37:49.790344] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:48.564 [2024-11-19 18:37:49.790699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:48.564 18:37:49 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:48.564 18:37:49 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:48.564 87036600 00:38:48.564 18:37:49 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:48.564 1061525540 00:38:48.564 18:37:49 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2322965 00:38:48.564 18:37:49 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2322965 /var/tmp/bperf.sock 00:38:48.564 18:37:49 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:48.564 18:37:49 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2322965 ']' 00:38:48.564 18:37:49 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:48.564 18:37:49 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:48.564 18:37:49 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:48.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:48.565 18:37:49 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:48.565 18:37:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:48.565 [2024-11-19 18:37:49.876585] Starting SPDK v25.01-pre git sha1 8d982eda9 / DPDK 24.03.0 initialization... 
00:38:48.565 [2024-11-19 18:37:49.876632] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2322965 ] 00:38:48.565 [2024-11-19 18:37:49.959709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.565 [2024-11-19 18:37:49.989341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:49.507 18:37:50 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:49.507 18:37:50 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:49.507 18:37:50 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:49.507 18:37:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:49.507 18:37:50 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:49.507 18:37:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:49.767 18:37:51 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:49.767 18:37:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:49.767 [2024-11-19 18:37:51.197715] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:50.028 nvme0n1 00:38:50.028 18:37:51 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:38:50.028 18:37:51 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:50.028 18:37:51 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:50.028 18:37:51 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:50.028 18:37:51 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:50.028 18:37:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:50.028 18:37:51 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:50.028 18:37:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:50.028 18:37:51 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:50.028 18:37:51 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:50.028 18:37:51 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:50.028 18:37:51 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:50.028 18:37:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:50.289 18:37:51 keyring_linux -- keyring/linux.sh@25 -- # sn=87036600 00:38:50.289 18:37:51 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:50.289 18:37:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:50.289 18:37:51 keyring_linux -- keyring/linux.sh@26 -- # [[ 87036600 == \8\7\0\3\6\6\0\0 ]] 00:38:50.289 18:37:51 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 87036600 00:38:50.289 18:37:51 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:50.289 18:37:51 keyring_linux -- 
keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:50.289 Running I/O for 1 seconds... 00:38:51.672 24467.00 IOPS, 95.57 MiB/s 00:38:51.672 Latency(us) 00:38:51.672 [2024-11-19T17:37:53.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:51.672 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:51.672 nvme0n1 : 1.01 24467.88 95.58 0.00 0.00 5215.87 4014.08 11905.71 00:38:51.672 [2024-11-19T17:37:53.143Z] =================================================================================================================== 00:38:51.672 [2024-11-19T17:37:53.143Z] Total : 24467.88 95.58 0.00 0.00 5215.87 4014.08 11905.71 00:38:51.672 { 00:38:51.672 "results": [ 00:38:51.672 { 00:38:51.672 "job": "nvme0n1", 00:38:51.672 "core_mask": "0x2", 00:38:51.672 "workload": "randread", 00:38:51.672 "status": "finished", 00:38:51.672 "queue_depth": 128, 00:38:51.672 "io_size": 4096, 00:38:51.672 "runtime": 1.005277, 00:38:51.672 "iops": 24467.88298150659, 00:38:51.672 "mibps": 95.57766789651012, 00:38:51.672 "io_failed": 0, 00:38:51.672 "io_timeout": 0, 00:38:51.672 "avg_latency_us": 5215.87363228578, 00:38:51.672 "min_latency_us": 4014.08, 00:38:51.672 "max_latency_us": 11905.706666666667 00:38:51.672 } 00:38:51.672 ], 00:38:51.672 "core_count": 1 00:38:51.672 } 00:38:51.672 18:37:52 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:51.672 18:37:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:51.672 18:37:52 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:51.672 18:37:52 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:51.672 18:37:52 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:51.672 18:37:52 keyring_linux -- 
keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:51.672 18:37:52 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:51.672 18:37:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:51.672 18:37:53 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:51.672 18:37:53 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:51.672 18:37:53 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:51.672 18:37:53 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:51.672 18:37:53 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:38:51.672 18:37:53 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:51.672 18:37:53 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:51.672 18:37:53 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:51.672 18:37:53 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:51.672 18:37:53 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:51.672 18:37:53 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:51.672 18:37:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:51.933 [2024-11-19 18:37:53.279606] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:51.933 [2024-11-19 18:37:53.280452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x629480 (107): Transport endpoint is not connected 00:38:51.933 [2024-11-19 18:37:53.281447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x629480 (9): Bad file descriptor 00:38:51.934 [2024-11-19 18:37:53.282450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:51.934 [2024-11-19 18:37:53.282458] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:51.934 [2024-11-19 18:37:53.282464] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:51.934 [2024-11-19 18:37:53.282470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:51.934 request: 00:38:51.934 { 00:38:51.934 "name": "nvme0", 00:38:51.934 "trtype": "tcp", 00:38:51.934 "traddr": "127.0.0.1", 00:38:51.934 "adrfam": "ipv4", 00:38:51.934 "trsvcid": "4420", 00:38:51.934 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:51.934 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:51.934 "prchk_reftag": false, 00:38:51.934 "prchk_guard": false, 00:38:51.934 "hdgst": false, 00:38:51.934 "ddgst": false, 00:38:51.934 "psk": ":spdk-test:key1", 00:38:51.934 "allow_unrecognized_csi": false, 00:38:51.934 "method": "bdev_nvme_attach_controller", 00:38:51.934 "req_id": 1 00:38:51.934 } 00:38:51.934 Got JSON-RPC error response 00:38:51.934 response: 00:38:51.934 { 00:38:51.934 "code": -5, 00:38:51.934 "message": "Input/output error" 00:38:51.934 } 00:38:51.934 18:37:53 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:38:51.934 18:37:53 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:51.934 18:37:53 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:51.934 18:37:53 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:51.934 18:37:53 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:51.934 18:37:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:51.934 18:37:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:51.934 18:37:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:51.934 18:37:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:51.934 18:37:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:51.934 18:37:53 keyring_linux -- keyring/linux.sh@33 -- # sn=87036600 00:38:51.934 18:37:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 87036600 00:38:51.934 1 links removed 00:38:51.934 18:37:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:51.934 18:37:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:51.934 
18:37:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:51.934 18:37:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:51.934 18:37:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:51.934 18:37:53 keyring_linux -- keyring/linux.sh@33 -- # sn=1061525540 00:38:51.934 18:37:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1061525540 00:38:51.934 1 links removed 00:38:51.934 18:37:53 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2322965 00:38:51.934 18:37:53 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2322965 ']' 00:38:51.934 18:37:53 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2322965 00:38:51.934 18:37:53 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:51.934 18:37:53 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:51.934 18:37:53 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2322965 00:38:51.934 18:37:53 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:51.934 18:37:53 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:51.934 18:37:53 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2322965' 00:38:51.934 killing process with pid 2322965 00:38:51.934 18:37:53 keyring_linux -- common/autotest_common.sh@973 -- # kill 2322965 00:38:51.934 Received shutdown signal, test time was about 1.000000 seconds 00:38:51.934 00:38:51.934 Latency(us) 00:38:51.934 [2024-11-19T17:37:53.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:51.934 [2024-11-19T17:37:53.405Z] =================================================================================================================== 00:38:51.934 [2024-11-19T17:37:53.405Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:51.934 18:37:53 keyring_linux -- common/autotest_common.sh@978 -- # wait 
2322965 00:38:52.194 18:37:53 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2322631 00:38:52.194 18:37:53 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2322631 ']' 00:38:52.194 18:37:53 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2322631 00:38:52.194 18:37:53 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:52.194 18:37:53 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:52.194 18:37:53 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2322631 00:38:52.194 18:37:53 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:52.194 18:37:53 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:52.194 18:37:53 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2322631' 00:38:52.194 killing process with pid 2322631 00:38:52.194 18:37:53 keyring_linux -- common/autotest_common.sh@973 -- # kill 2322631 00:38:52.194 18:37:53 keyring_linux -- common/autotest_common.sh@978 -- # wait 2322631 00:38:52.455 00:38:52.455 real 0m5.187s 00:38:52.455 user 0m9.609s 00:38:52.455 sys 0m1.441s 00:38:52.455 18:37:53 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:52.455 18:37:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:52.455 ************************************ 00:38:52.455 END TEST keyring_linux 00:38:52.455 ************************************ 00:38:52.455 18:37:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:52.455 18:37:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:52.455 18:37:53 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:52.455 18:37:53 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:52.455 18:37:53 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:52.455 18:37:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:52.455 18:37:53 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:52.455 18:37:53 -- 
spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:38:52.455 18:37:53 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:38:52.455 18:37:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:38:52.455 18:37:53 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:38:52.455 18:37:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:38:52.455 18:37:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:38:52.455 18:37:53 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:38:52.455 18:37:53 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:38:52.455 18:37:53 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:38:52.455 18:37:53 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:38:52.455 18:37:53 -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:52.455 18:37:53 -- common/autotest_common.sh@10 -- # set +x
00:38:52.455 18:37:53 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:38:52.455 18:37:53 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:38:52.455 18:37:53 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:38:52.455 18:37:53 -- common/autotest_common.sh@10 -- # set +x
00:39:00.591 INFO: APP EXITING
00:39:00.591 INFO: killing all VMs
00:39:00.591 INFO: killing vhost app
00:39:00.591 WARN: no vhost pid file found
00:39:00.591 INFO: EXIT DONE
00:39:03.902 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:39:03.902 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:39:03.902 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:39:03.902 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:39:03.902 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:39:03.902 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:39:03.902 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:39:03.902 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:39:03.902 0000:65:00.0 (144d a80a): Already using the nvme driver
00:39:03.902 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:39:03.902 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:39:03.902 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:39:03.902 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:39:03.902 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:39:03.902 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:39:03.902 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:39:03.902 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:39:08.113 Cleaning
00:39:08.113 Removing: /var/run/dpdk/spdk0/config
00:39:08.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:39:08.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:39:08.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:39:08.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:39:08.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:39:08.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:39:08.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:39:08.113 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:39:08.113 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:39:08.113 Removing: /var/run/dpdk/spdk0/hugepage_info
00:39:08.113 Removing: /var/run/dpdk/spdk1/config
00:39:08.113 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:39:08.113 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:39:08.113 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:39:08.113 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:39:08.113 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:39:08.113 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:39:08.113 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:39:08.113 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:39:08.113 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:39:08.113 Removing: /var/run/dpdk/spdk1/hugepage_info
00:39:08.113 Removing: /var/run/dpdk/spdk2/config
00:39:08.113 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:39:08.113 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:39:08.113 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:39:08.113 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:39:08.113 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:39:08.113 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:39:08.113 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:39:08.113 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:39:08.113 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:39:08.113 Removing: /var/run/dpdk/spdk2/hugepage_info
00:39:08.113 Removing: /var/run/dpdk/spdk3/config
00:39:08.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:39:08.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:39:08.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:39:08.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:39:08.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:39:08.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:39:08.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:39:08.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:39:08.113 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:39:08.113 Removing: /var/run/dpdk/spdk3/hugepage_info
00:39:08.113 Removing: /var/run/dpdk/spdk4/config
00:39:08.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:39:08.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:39:08.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:39:08.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:39:08.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:39:08.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:39:08.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:39:08.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:39:08.113 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:39:08.113 Removing: /var/run/dpdk/spdk4/hugepage_info
00:39:08.113 Removing: /dev/shm/bdev_svc_trace.1
00:39:08.113 Removing: /dev/shm/nvmf_trace.0
00:39:08.113 Removing: /dev/shm/spdk_tgt_trace.pid1747265
00:39:08.113 Removing: /var/run/dpdk/spdk0
00:39:08.113 Removing: /var/run/dpdk/spdk1
00:39:08.113 Removing: /var/run/dpdk/spdk2
00:39:08.113 Removing: /var/run/dpdk/spdk3
00:39:08.113 Removing: /var/run/dpdk/spdk4
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1745777
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1747265
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1748112
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1749153
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1749489
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1750566
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1750708
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1751029
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1752167
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1752860
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1753215
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1753571
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1753936
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1754264
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1754601
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1754952
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1755321
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1756409
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1759875
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1760213
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1760562
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1760740
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1761119
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1761448
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1761822
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1761890
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1762201
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1762535
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1762599
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1762911
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1763360
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1763712
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1764105
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1768642
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1774046
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1786009
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1786855
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1792392
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1792787
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1798128
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1805215
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1808308
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1820868
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1831649
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1833769
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1834936
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1856170
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1860929
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1917219
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1923657
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1930720
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1938610
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1938612
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1939615
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1940621
00:39:08.114 Removing: /var/run/dpdk/spdk_pid1941631
00:39:08.375 Removing: /var/run/dpdk/spdk_pid1942301
00:39:08.375 Removing: /var/run/dpdk/spdk_pid1942303
00:39:08.375 Removing: /var/run/dpdk/spdk_pid1942644
00:39:08.375 Removing: /var/run/dpdk/spdk_pid1942709
00:39:08.375 Removing: /var/run/dpdk/spdk_pid1942823
00:39:08.375 Removing: /var/run/dpdk/spdk_pid1943872
00:39:08.375 Removing: /var/run/dpdk/spdk_pid1944872
00:39:08.375 Removing: /var/run/dpdk/spdk_pid1945963
00:39:08.375 Removing: /var/run/dpdk/spdk_pid1946642
00:39:08.375 Removing: /var/run/dpdk/spdk_pid1946767
00:39:08.375 Removing: /var/run/dpdk/spdk_pid1947002
00:39:08.375 Removing: /var/run/dpdk/spdk_pid1948815
00:39:08.375 Removing: /var/run/dpdk/spdk_pid1950084
00:39:08.375 Removing: /var/run/dpdk/spdk_pid1959924
00:39:08.375 Removing: /var/run/dpdk/spdk_pid1994354
00:39:08.375 Removing: /var/run/dpdk/spdk_pid1999958
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2001819
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2004007
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2004265
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2004443
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2004704
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2005437
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2007763
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2008868
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2009554
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2012263
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2012971
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2013741
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2018760
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2025455
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2025456
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2025457
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2030144
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2040955
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2045768
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2052947
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2054485
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2056033
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2057874
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2063277
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2068699
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2073740
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2082842
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2082848
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2087907
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2088233
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2088478
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2088999
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2089024
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2095081
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2095690
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2101172
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2104216
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2110930
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2117470
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2127582
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2136047
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2136049
00:39:08.375 Removing: /var/run/dpdk/spdk_pid2159462
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2160147
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2160897
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2161759
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2162768
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2163542
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2164260
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2164950
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2170023
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2170342
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2177517
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2177761
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2184225
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2189333
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2201512
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2202185
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2207239
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2207590
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2212626
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2219525
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2222431
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2234702
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2245396
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2247861
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2248868
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2268475
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2273191
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2276367
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2283995
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2284102
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2290026
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2292231
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2294754
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2296050
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2299019
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2300261
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2310472
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2310980
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2311525
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2314455
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2315123
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2315652
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2320293
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2320381
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2322190
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2322631
00:39:08.636 Removing: /var/run/dpdk/spdk_pid2322965
00:39:08.636 Clean
00:39:08.898 18:38:10 -- common/autotest_common.sh@1453 -- # return 0
00:39:08.898 18:38:10 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:39:08.898 18:38:10 -- common/autotest_common.sh@732 -- # xtrace_disable
00:39:08.898 18:38:10 -- common/autotest_common.sh@10 -- # set +x
00:39:08.898 18:38:10 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:39:08.898 18:38:10 -- common/autotest_common.sh@732 -- # xtrace_disable
00:39:08.898 18:38:10 -- common/autotest_common.sh@10 -- # set +x
00:39:08.898 18:38:10 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:08.898 18:38:10 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:39:08.898 18:38:10 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:39:08.898 18:38:10 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:39:08.898 18:38:10 -- spdk/autotest.sh@398 -- # hostname
00:39:08.898 18:38:10 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:39:09.161 geninfo: WARNING: invalid characters removed from testname!
00:39:35.741 18:38:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:37.650 18:38:38 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:39.032 18:38:40 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:40.942 18:38:42 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:42.326 18:38:43 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:44.236 18:38:45 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:45.619 18:38:47 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:45.619 18:38:47 -- spdk/autorun.sh@1 -- $ timing_finish
00:39:45.619 18:38:47 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:39:45.619 18:38:47 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:45.619 18:38:47 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:39:45.619 18:38:47 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:45.619 + [[ -n 1660711 ]]
00:39:45.619 + sudo kill 1660711
00:39:45.892 [Pipeline] }
00:39:45.907 [Pipeline] // stage
00:39:45.912 [Pipeline] }
00:39:45.927 [Pipeline] // timeout
00:39:45.932 [Pipeline] }
00:39:45.945 [Pipeline] // catchError
00:39:45.951 [Pipeline] }
00:39:45.969 [Pipeline] // wrap
00:39:45.974 [Pipeline] }
00:39:45.987 [Pipeline] // catchError
00:39:45.996 [Pipeline] stage
00:39:45.998 [Pipeline] { (Epilogue)
00:39:46.010 [Pipeline] catchError
00:39:46.012 [Pipeline] {
00:39:46.024 [Pipeline] echo
00:39:46.026 Cleanup processes
00:39:46.032 [Pipeline] sh
00:39:46.323 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:46.323 2335969 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:46.337 [Pipeline] sh
00:39:46.626 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:46.626 ++ grep -v 'sudo pgrep'
00:39:46.626 ++ awk '{print $1}'
00:39:46.626 + sudo kill -9
00:39:46.626 + true
00:39:46.639 [Pipeline] sh
00:39:46.930 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:59.241 [Pipeline] sh
00:39:59.531 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:59.531 Artifacts sizes are good
00:39:59.546 [Pipeline] archiveArtifacts
00:39:59.554 Archiving artifacts
00:39:59.702 [Pipeline] sh
00:39:59.998 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:40:00.015 [Pipeline] cleanWs
00:40:00.025 [WS-CLEANUP] Deleting project workspace...
00:40:00.025 [WS-CLEANUP] Deferred wipeout is used...
00:40:00.033 [WS-CLEANUP] done
00:40:00.035 [Pipeline] }
00:40:00.052 [Pipeline] // catchError
00:40:00.064 [Pipeline] sh
00:40:00.355 + logger -p user.info -t JENKINS-CI
00:40:00.366 [Pipeline] }
00:40:00.382 [Pipeline] // stage
00:40:00.388 [Pipeline] }
00:40:00.404 [Pipeline] // node
00:40:00.411 [Pipeline] End of Pipeline
00:40:00.450 Finished: SUCCESS